Test Report: Docker_Linux_docker_arm64 19667

39f19baf3a7e1c810682dda0eb22abd909c6f2ab:2024-09-18:36273

Failed tests (3/343)

|-------|--------------------------------------------------------|--------------|
| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry                           | 75.95        |
| 133   | TestFunctional/parallel/MountCmd/specific-port         | 12.9         |
| 310   | TestStartStop/group/old-k8s-version/serial/SecondStart | 374.04       |
|-------|--------------------------------------------------------|--------------|

TestAddons/parallel/Registry (75.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.782635ms
I0918 19:50:36.455832    7565 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-m9pdd" [be6aeece-e555-4628-88de-f374e1e78aa3] Running
I0918 19:50:36.463878    7565 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0918 19:50:36.463997    7565 kapi.go:107] duration metric: took 11.219991ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004205442s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rxskq" [e2a2228e-559d-447a-953c-77300e373ad5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004907319s
addons_test.go:342: (dbg) Run:  kubectl --context addons-923322 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.129814598s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 ip
2024/09/18 19:51:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable registry --alsologtostderr -v=1: (1.179674417s)
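Triage note: the failing step is the in-cluster probe at addons_test.go:347. The registry and registry-proxy pods came up healthy beforehand, and the direct GET against the node IP (http://192.168.49.2:5000) is not followed by any error above, which points at in-cluster DNS or Service routing rather than the registry container itself. A minimal way to re-run the probe by hand, assuming the addons-923322 profile is still running with the registry addon enabled (the service name, labels, and image below are taken from the test output):

	# Re-run the probe that timed out after 1m0s; a healthy registry lets wget exit 0:
	kubectl --context addons-923322 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

	# Narrow down whether the pods or the Service/DNS path is at fault:
	kubectl --context addons-923322 -n kube-system get pods -l actual-registry=true
	kubectl --context addons-923322 -n kube-system get pods -l registry-proxy=true
	kubectl --context addons-923322 -n kube-system get svc registry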
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-923322
helpers_test.go:235: (dbg) docker inspect addons-923322:

-- stdout --
	[
	    {
	        "Id": "b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679",
	        "Created": "2024-09-18T19:38:35.634748954Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8813,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-18T19:38:35.805208547Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/hostname",
	        "HostsPath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/hosts",
	        "LogPath": "/var/lib/docker/containers/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679/b38fecd59f11aa0c3e537fbf8e458dffb98c5727f8744804404a85de07aa7679-json.log",
	        "Name": "/addons-923322",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-923322:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-923322",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3-init/diff:/var/lib/docker/overlay2/2d5f4db6bef4f73456b3d6729836bc99a064b2dff1ec273e613fe21fbf6cf84d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf86b036273ed52eca3dc069fc8c9d19256c0480fe0c2e4433246a28fcbd68a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-923322",
	                "Source": "/var/lib/docker/volumes/addons-923322/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-923322",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-923322",
	                "name.minikube.sigs.k8s.io": "addons-923322",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c5c1622f5e172c825854ba80ad33abf3a0c4099418ab8a0bcc30e9f90fbcb52d",
	            "SandboxKey": "/var/run/docker/netns/c5c1622f5e17",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-923322": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "07fa96b9e48eadd9fc9febbf6a977a0a660ba2cb85d425369ac66bc0a9c06077",
	                    "EndpointID": "b6c0d38c2dcfb3232290e66faef2473565391a3c14d0c37e67380fcbcf4cf7e8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-923322",
	                        "b38fecd59f11"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
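Rather than reading the whole dump, single fields can be pulled from docker with a Go template; the harness does exactly this later in the log (the docker container inspect -f calls that look up the 22/tcp host port). Two example queries against this container: the first template is copied from those calls, the second is an assumed variant in the same style:

	# Host port published for the container's SSH port (22/tcp); 32768 in this run:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-923322

	# Container IP on the addons-923322 network; 192.168.49.2 in this run:
	docker container inspect -f '{{(index .NetworkSettings.Networks "addons-923322").IPAddress}}' addons-923322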
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-923322 -n addons-923322
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 logs -n 25: (1.766465621s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-843008   | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p download-only-843008              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-843008              | download-only-843008   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | -o=json --download-only              | download-only-593891   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | -p download-only-593891              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-593891              | download-only-593891   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-843008              | download-only-843008   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-593891              | download-only-593891   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                   | download-docker-404631 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | download-docker-404631               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-404631            | download-docker-404631 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | --download-only -p                   | binary-mirror-976038   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | binary-mirror-976038                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41665               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-976038              | binary-mirror-976038   | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| addons  | enable dashboard -p                  | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-923322                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | addons-923322                        |                        |         |         |                     |                     |
	| start   | -p addons-923322 --wait=true         | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-923322 addons disable         | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:42 UTC | 18 Sep 24 19:42 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-923322 addons                 | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-923322 addons                 | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-923322 addons                 | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | addons-923322                        |                        |         |         |                     |                     |
	| ssh     | addons-923322 ssh curl -s            | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-923322 ip                     | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	| addons  | addons-923322 addons disable         | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-923322 ip                     | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC | 18 Sep 24 19:51 UTC |
	| addons  | addons-923322 addons disable         | addons-923322          | jenkins | v1.34.0 | 18 Sep 24 19:51 UTC |                     |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:11
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:11.208338    8317 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:11.208497    8317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:11.208509    8317 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:11.208514    8317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:11.208759    8317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 19:38:11.209222    8317 out.go:352] Setting JSON to false
	I0918 19:38:11.209948    8317 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1239,"bootTime":1726687053,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 19:38:11.210017    8317 start.go:139] virtualization:  
	I0918 19:38:11.211668    8317 out.go:177] * [addons-923322] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 19:38:11.213230    8317 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:38:11.213404    8317 notify.go:220] Checking for updates...
	I0918 19:38:11.215898    8317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:11.217315    8317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 19:38:11.219026    8317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 19:38:11.220367    8317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:38:11.221558    8317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:11.223042    8317 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:11.245017    8317 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:38:11.245149    8317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:11.307127    8317 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 19:38:11.297373101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:38:11.307295    8317 docker.go:318] overlay module found
	I0918 19:38:11.308662    8317 out.go:177] * Using the docker driver based on user configuration
	I0918 19:38:11.309757    8317 start.go:297] selected driver: docker
	I0918 19:38:11.309770    8317 start.go:901] validating driver "docker" against <nil>
	I0918 19:38:11.309783    8317 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:11.310384    8317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:11.366839    8317 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 19:38:11.355172228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:38:11.367038    8317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:11.367307    8317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:38:11.368740    8317 out.go:177] * Using Docker driver with root privileges
	I0918 19:38:11.370000    8317 cni.go:84] Creating CNI manager for ""
	I0918 19:38:11.370080    8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:11.370095    8317 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:11.370177    8317 start.go:340] cluster config:
	{Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:11.371712    8317 out.go:177] * Starting "addons-923322" primary control-plane node in "addons-923322" cluster
	I0918 19:38:11.372986    8317 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 19:38:11.374235    8317 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 19:38:11.375404    8317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:11.375466    8317 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 19:38:11.375479    8317 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:11.375492    8317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 19:38:11.375556    8317 preload.go:172] Found /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 19:38:11.375566    8317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 19:38:11.375910    8317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/config.json ...
	I0918 19:38:11.375938    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/config.json: {Name:mk413e862c8527b15a3dc7cd54f06f1891ae5447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:11.391452    8317 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:38:11.391585    8317 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 19:38:11.391620    8317 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 19:38:11.391625    8317 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 19:38:11.391633    8317 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 19:38:11.391639    8317 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 19:38:28.909573    8317 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 19:38:28.909612    8317 cache.go:194] Successfully downloaded all kic artifacts
	I0918 19:38:28.909658    8317 start.go:360] acquireMachinesLock for addons-923322: {Name:mk40670ccc3fb08a13df272a775834621a889ecb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:38:28.909812    8317 start.go:364] duration metric: took 125.379µs to acquireMachinesLock for "addons-923322"
	I0918 19:38:28.909854    8317 start.go:93] Provisioning new machine with config: &{Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:38:28.909956    8317 start.go:125] createHost starting for "" (driver="docker")
	I0918 19:38:28.912711    8317 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0918 19:38:28.912989    8317 start.go:159] libmachine.API.Create for "addons-923322" (driver="docker")
	I0918 19:38:28.913027    8317 client.go:168] LocalClient.Create starting
	I0918 19:38:28.913167    8317 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem
	I0918 19:38:29.254436    8317 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem
	I0918 19:38:29.536671    8317 cli_runner.go:164] Run: docker network inspect addons-923322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 19:38:29.562202    8317 cli_runner.go:211] docker network inspect addons-923322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 19:38:29.562296    8317 network_create.go:284] running [docker network inspect addons-923322] to gather additional debugging logs...
	I0918 19:38:29.562321    8317 cli_runner.go:164] Run: docker network inspect addons-923322
	W0918 19:38:29.577352    8317 cli_runner.go:211] docker network inspect addons-923322 returned with exit code 1
	I0918 19:38:29.577387    8317 network_create.go:287] error running [docker network inspect addons-923322]: docker network inspect addons-923322: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-923322 not found
	I0918 19:38:29.577400    8317 network_create.go:289] output of [docker network inspect addons-923322]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-923322 not found
	
	** /stderr **
	I0918 19:38:29.577500    8317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:38:29.595996    8317 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b3c840}
	I0918 19:38:29.596044    8317 network_create.go:124] attempt to create docker network addons-923322 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0918 19:38:29.596103    8317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-923322 addons-923322
	I0918 19:38:29.660537    8317 network_create.go:108] docker network addons-923322 192.168.49.0/24 created
	I0918 19:38:29.660568    8317 kic.go:121] calculated static IP "192.168.49.2" for the "addons-923322" container
	I0918 19:38:29.660640    8317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 19:38:29.678309    8317 cli_runner.go:164] Run: docker volume create addons-923322 --label name.minikube.sigs.k8s.io=addons-923322 --label created_by.minikube.sigs.k8s.io=true
	I0918 19:38:29.695271    8317 oci.go:103] Successfully created a docker volume addons-923322
	I0918 19:38:29.695364    8317 cli_runner.go:164] Run: docker run --rm --name addons-923322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-923322 --entrypoint /usr/bin/test -v addons-923322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0918 19:38:31.809829    8317 cli_runner.go:217] Completed: docker run --rm --name addons-923322-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-923322 --entrypoint /usr/bin/test -v addons-923322:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.114405088s)
	I0918 19:38:31.809858    8317 oci.go:107] Successfully prepared a docker volume addons-923322
	I0918 19:38:31.809881    8317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:31.809901    8317 kic.go:194] Starting extracting preloaded images to volume ...
	I0918 19:38:31.809969    8317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-923322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 19:38:35.561012    8317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-923322:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.751000338s)
	I0918 19:38:35.561042    8317 kic.go:203] duration metric: took 3.751138796s to extract preloaded images to volume ...
	W0918 19:38:35.561188    8317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 19:38:35.561302    8317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 19:38:35.618595    8317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-923322 --name addons-923322 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-923322 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-923322 --network addons-923322 --ip 192.168.49.2 --volume addons-923322:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0918 19:38:35.987353    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Running}}
	I0918 19:38:36.008442    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:38:36.047830    8317 cli_runner.go:164] Run: docker exec addons-923322 stat /var/lib/dpkg/alternatives/iptables
	I0918 19:38:36.128869    8317 oci.go:144] the created container "addons-923322" has a running status.
	I0918 19:38:36.128907    8317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa...
	I0918 19:38:36.435484    8317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 19:38:36.477363    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:38:36.503978    8317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 19:38:36.504016    8317 kic_runner.go:114] Args: [docker exec --privileged addons-923322 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 19:38:36.597104    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:38:36.624222    8317 machine.go:93] provisionDockerMachine start ...
	I0918 19:38:36.624313    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:36.649282    8317 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:36.649536    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:36.649553    8317 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 19:38:36.839880    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-923322
	
	I0918 19:38:36.839908    8317 ubuntu.go:169] provisioning hostname "addons-923322"
	I0918 19:38:36.839975    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:36.860426    8317 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:36.860662    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:36.860674    8317 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-923322 && echo "addons-923322" | sudo tee /etc/hostname
	I0918 19:38:37.032003    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-923322
	
	I0918 19:38:37.032116    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:37.055902    8317 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:37.056163    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:37.056179    8317 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-923322' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-923322/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-923322' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:38:37.211677    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:38:37.211714    8317 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-2236/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-2236/.minikube}
	I0918 19:38:37.211748    8317 ubuntu.go:177] setting up certificates
	I0918 19:38:37.211767    8317 provision.go:84] configureAuth start
	I0918 19:38:37.211851    8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-923322
	I0918 19:38:37.229681    8317 provision.go:143] copyHostCerts
	I0918 19:38:37.229768    8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem (1078 bytes)
	I0918 19:38:37.229897    8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem (1123 bytes)
	I0918 19:38:37.229958    8317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem (1675 bytes)
	I0918 19:38:37.230008    8317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem org=jenkins.addons-923322 san=[127.0.0.1 192.168.49.2 addons-923322 localhost minikube]
	I0918 19:38:37.631268    8317 provision.go:177] copyRemoteCerts
	I0918 19:38:37.631333    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:38:37.631383    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:37.648811    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:38:37.752152    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0918 19:38:37.779201    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0918 19:38:37.804230    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:38:37.828720    8317 provision.go:87] duration metric: took 616.927396ms to configureAuth
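
The configureAuth step generates a server certificate signed by the local minikube CA, with the SANs listed in the log line above (127.0.0.1, 192.168.49.2, addons-923322, localhost, minikube). A stripped-down sketch of that kind of SAN-bearing issuance with crypto/x509 (key sizes, lifetimes, and file names here are illustrative, not minikube's):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// makeServerCert issues a TLS server certificate for the given SANs,
// signed by caCert/caKey, and writes it out PEM-encoded.
func makeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-923322"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: IPs and DNS names live in separate fields.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-923322", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	f, err := os.Create("server.pem")
	if err != nil {
		return err
	}
	defer f.Close()
	return pem.Encode(f, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

func main() {
	// Throwaway self-signed CA for the sketch; error handling elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	if err := makeServerCert(caCert, caKey); err != nil {
		panic(err)
	}
}
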
	I0918 19:38:37.828755    8317 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:38:37.828986    8317 config.go:182] Loaded profile config "addons-923322": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:38:37.829051    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:37.847331    8317 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:37.847580    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:37.847599    8317 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 19:38:37.991652    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0918 19:38:37.991672    8317 ubuntu.go:71] root file system type: overlay
	I0918 19:38:37.991777    8317 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 19:38:37.991849    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:38.010817    8317 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:38.011074    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:38.011157    8317 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 19:38:38.175750    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 19:38:38.175835    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:38.193566    8317 main.go:141] libmachine: Using SSH client type: native
	I0918 19:38:38.193825    8317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0918 19:38:38.193850    8317 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 19:38:38.975692    8317 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-18 19:38:38.169317622 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
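
The install command above is deliberately idempotent: the freshly rendered docker.service.new is only swapped into place, followed by daemon-reload/enable/restart, when diff reports that it differs from the installed unit; as the diff output shows, this first boot replaces the stock unit wholesale. The same compare-then-swap shape in Go (hypothetical local sketch; minikube runs this over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged swaps newPath into place and restarts the unit only
// when its contents differ from the currently installed file.
func installIfChanged(curPath, newPath, unit string) error {
	cur, _ := os.ReadFile(curPath) // a missing file reads as empty, i.e. "changed"
	next, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(cur, next) {
		return os.Remove(newPath) // nothing to do; drop the staging file
	}
	if err := os.Rename(newPath, curPath); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
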
	
	I0918 19:38:38.975754    8317 machine.go:96] duration metric: took 2.351509928s to provisionDockerMachine
	I0918 19:38:38.975764    8317 client.go:171] duration metric: took 10.062728262s to LocalClient.Create
	I0918 19:38:38.975776    8317 start.go:167] duration metric: took 10.062788872s to libmachine.API.Create "addons-923322"
	I0918 19:38:38.975784    8317 start.go:293] postStartSetup for "addons-923322" (driver="docker")
	I0918 19:38:38.975801    8317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:38:38.975867    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:38:38.975912    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:38.994366    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:38:39.096834    8317 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:38:39.100458    8317 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:38:39.100495    8317 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:38:39.100508    8317 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:38:39.100518    8317 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 19:38:39.100529    8317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/addons for local assets ...
	I0918 19:38:39.100607    8317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/files for local assets ...
	I0918 19:38:39.100632    8317 start.go:296] duration metric: took 124.836236ms for postStartSetup
	I0918 19:38:39.100962    8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-923322
	I0918 19:38:39.118348    8317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/config.json ...
	I0918 19:38:39.118643    8317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:38:39.118696    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:39.135848    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:38:39.232161    8317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:38:39.236803    8317 start.go:128] duration metric: took 10.326802801s to createHost
	I0918 19:38:39.236826    8317 start.go:83] releasing machines lock for "addons-923322", held for 10.32699624s
	I0918 19:38:39.236905    8317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-923322
	I0918 19:38:39.253561    8317 ssh_runner.go:195] Run: cat /version.json
	I0918 19:38:39.253623    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:39.253919    8317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:38:39.253995    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:38:39.279652    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:38:39.282987    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:38:39.505838    8317 ssh_runner.go:195] Run: systemctl --version
	I0918 19:38:39.510159    8317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:38:39.514664    8317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 19:38:39.540493    8317 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:38:39.540572    8317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:38:39.567367    8317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0918 19:38:39.567434    8317 start.go:495] detecting cgroup driver to use...
	I0918 19:38:39.567474    8317 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 19:38:39.567583    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:39.584583    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 19:38:39.594241    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 19:38:39.603947    8317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 19:38:39.604014    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 19:38:39.614009    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:39.624646    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 19:38:39.635418    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 19:38:39.645378    8317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:38:39.654937    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 19:38:39.665028    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 19:38:39.675343    8317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 19:38:39.685561    8317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:38:39.694793    8317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:38:39.704015    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:39.793143    8317 ssh_runner.go:195] Run: sudo systemctl restart containerd
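
Each of the sed invocations above rewrites a single key of /etc/containerd/config.toml in place (sandbox_image, SystemdCgroup, conf_dir, and so on), after which containerd is restarted. A Go sketch of that one-key rewrite, here forcing SystemdCgroup = false to match the detected cgroupfs driver (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites every `key = ...` line in place, preserving
// indentation, much like the sed one-liners in the log.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)` + regexp.QuoteMeta(key) + `\s*=.*$`)
	out := re.ReplaceAll(data, []byte("${1}"+key+" = "+value))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := setTOMLKey("/etc/containerd/config.toml", "SystemdCgroup", "false"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
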
	I0918 19:38:39.890442    8317 start.go:495] detecting cgroup driver to use...
	I0918 19:38:39.890545    8317 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 19:38:39.890638    8317 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 19:38:39.915605    8317 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0918 19:38:39.915674    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 19:38:39.931609    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:38:39.951007    8317 ssh_runner.go:195] Run: which cri-dockerd
	I0918 19:38:39.956315    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 19:38:39.968745    8317 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0918 19:38:39.990865    8317 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 19:38:40.152272    8317 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 19:38:40.253078    8317 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 19:38:40.253208    8317 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 19:38:40.276703    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:40.363628    8317 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 19:38:40.623283    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 19:38:40.636696    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:40.649690    8317 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 19:38:40.742463    8317 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 19:38:40.823448    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:40.903407    8317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 19:38:40.918161    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 19:38:40.930434    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:41.025618    8317 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 19:38:41.096182    8317 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 19:38:41.096328    8317 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 19:38:41.101768    8317 start.go:563] Will wait 60s for crictl version
	I0918 19:38:41.101884    8317 ssh_runner.go:195] Run: which crictl
	I0918 19:38:41.106019    8317 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:38:41.145130    8317 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0918 19:38:41.145242    8317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 19:38:41.168096    8317 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 19:38:41.196847    8317 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0918 19:38:41.196945    8317 cli_runner.go:164] Run: docker network inspect addons-923322 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:38:41.213577    8317 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0918 19:38:41.218348    8317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:38:41.231368    8317 kubeadm.go:883] updating cluster {Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 19:38:41.231498    8317 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:41.231557    8317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 19:38:41.250623    8317 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 19:38:41.250646    8317 docker.go:615] Images already preloaded, skipping extraction
	I0918 19:38:41.250733    8317 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 19:38:41.269488    8317 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 19:38:41.269509    8317 cache_images.go:84] Images are preloaded, skipping loading
	I0918 19:38:41.269518    8317 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0918 19:38:41.269609    8317 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-923322 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
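
The kubelet unit above follows the same ExecStart-clearing drop-in pattern as the docker unit: an empty ExecStart= first, then the real command with node-specific flags. A plausible way such a unit gets rendered is from a template filled with per-node values (sketch only; minikube's actual template lives in its source tree):

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.31.1", "Node": "addons-923322", "IP": "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}
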
	I0918 19:38:41.269686    8317 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 19:38:41.315345    8317 cni.go:84] Creating CNI manager for ""
	I0918 19:38:41.315424    8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:41.315441    8317 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 19:38:41.315465    8317 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-923322 NodeName:addons-923322 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:38:41.315638    8317 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-923322"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
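
The generated kubeadm.yaml above stacks four documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Tools that consume it have to decode document by document; a small sketch with gopkg.in/yaml.v3 that fishes the kubelet cgroup driver back out (hypothetical check, not part of minikube; requires the yaml.v3 module):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // one YAML document per Decode call
	for {
		var doc struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc.Kind == "KubeletConfiguration" {
			fmt.Println("kubelet cgroup driver:", doc.CgroupDriver) // expect "cgroupfs"
		}
	}
}
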
	
	I0918 19:38:41.315713    8317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 19:38:41.324486    8317 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:38:41.324585    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:38:41.333710    8317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0918 19:38:41.351973    8317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:38:41.370351    8317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0918 19:38:41.389698    8317 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0918 19:38:41.393323    8317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:38:41.404705    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:38:41.499911    8317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 19:38:41.515332    8317 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322 for IP: 192.168.49.2
	I0918 19:38:41.515397    8317 certs.go:194] generating shared ca certs ...
	I0918 19:38:41.515430    8317 certs.go:226] acquiring lock for ca certs: {Name:mk958e02b356056556309ee300f2f34fdfb18284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:41.515594    8317 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key
	I0918 19:38:41.935140    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt ...
	I0918 19:38:41.935173    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt: {Name:mkf111cf3b15e82ccb3baf57879afd2414af0c38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:41.935394    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key ...
	I0918 19:38:41.935408    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key: {Name:mk477b03db8b73097773933aed42528067072d14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:41.935501    8317 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key
	I0918 19:38:42.191991    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt ...
	I0918 19:38:42.192028    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt: {Name:mk9f33625027085912b668e637f81c0e9aeb9347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:42.192241    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key ...
	I0918 19:38:42.192255    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key: {Name:mk990ac1af6211151bb505f89d8555cf1e9130ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:42.192339    8317 certs.go:256] generating profile certs ...
	I0918 19:38:42.192404    8317 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.key
	I0918 19:38:42.192433    8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt with IP's: []
	I0918 19:38:42.515618    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt ...
	I0918 19:38:42.515645    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: {Name:mkaa8b50d1d5114bb4732284de066e243de0dca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:42.515835    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.key ...
	I0918 19:38:42.515849    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.key: {Name:mkfcfdbdc2b8cd7ffc00401710f1d36e0fb59a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:42.515926    8317 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b
	I0918 19:38:42.515954    8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0918 19:38:43.650468    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b ...
	I0918 19:38:43.650502    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b: {Name:mkc592eaf84c2356572fc618c3e4bc7ff514809b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:43.650683    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b ...
	I0918 19:38:43.650697    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b: {Name:mka6411585c3e092cf2a25636b75af98e4295e26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:43.650780    8317 certs.go:381] copying /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt.37f9a37b -> /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt
	I0918 19:38:43.650863    8317 certs.go:385] copying /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key.37f9a37b -> /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key
	I0918 19:38:43.650917    8317 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key
	I0918 19:38:43.650936    8317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt with IP's: []
	I0918 19:38:44.488973    8317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt ...
	I0918 19:38:44.489009    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt: {Name:mkd8eaa979655bcdecba0f9ea6e35c568f3aa35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:44.489209    8317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key ...
	I0918 19:38:44.489222    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key: {Name:mk381bf7a929d54726eff6684a7b7e9eeee5a02b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:44.489413    8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 19:38:44.489456    8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem (1078 bytes)
	I0918 19:38:44.489486    8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:38:44.489516    8317 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem (1675 bytes)
	I0918 19:38:44.490120    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:38:44.513973    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 19:38:44.539034    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:38:44.564127    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 19:38:44.589395    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0918 19:38:44.616442    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 19:38:44.644314    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:38:44.671032    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 19:38:44.697206    8317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:38:44.722045    8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:38:44.741487    8317 ssh_runner.go:195] Run: openssl version
	I0918 19:38:44.747137    8317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:38:44.757500    8317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:44.761196    8317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:44.761301    8317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:38:44.768893    8317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
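
openssl x509 -hash prints the subject-name hash that OpenSSL uses to look up trusted certificates by filename, which is how the CA ends up reachable as /etc/ssl/certs/b5213941.0. The hash-then-symlink step, driven from Go via os/exec (assumes openssl is on PATH; sketch only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	// Only create the link if it is missing, mirroring the test -L guard above.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted via", link)
}
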
	I0918 19:38:44.778270    8317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 19:38:44.781727    8317 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 19:38:44.781776    8317 kubeadm.go:392] StartCluster: {Name:addons-923322 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-923322 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:44.781909    8317 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 19:38:44.798609    8317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:38:44.807449    8317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:38:44.816582    8317 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0918 19:38:44.816652    8317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:38:44.826554    8317 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:38:44.826620    8317 kubeadm.go:157] found existing configuration files:
	
	I0918 19:38:44.826691    8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 19:38:44.835914    8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 19:38:44.835983    8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 19:38:44.845006    8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 19:38:44.854008    8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 19:38:44.854097    8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 19:38:44.862975    8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 19:38:44.872704    8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 19:38:44.872818    8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 19:38:44.881473    8317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 19:38:44.890394    8317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 19:38:44.890497    8317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0918 19:38:44.899085    8317 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 19:38:44.943015    8317 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 19:38:44.943104    8317 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 19:38:44.968654    8317 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0918 19:38:44.968839    8317 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0918 19:38:44.968916    8317 kubeadm.go:310] OS: Linux
	I0918 19:38:44.968992    8317 kubeadm.go:310] CGROUPS_CPU: enabled
	I0918 19:38:44.969070    8317 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0918 19:38:44.969147    8317 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0918 19:38:44.969232    8317 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0918 19:38:44.969310    8317 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0918 19:38:44.969400    8317 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0918 19:38:44.969479    8317 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0918 19:38:44.969566    8317 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0918 19:38:44.969642    8317 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0918 19:38:45.082279    8317 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:38:45.082425    8317 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:38:45.082532    8317 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 19:38:45.106095    8317 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:38:45.110749    8317 out.go:235]   - Generating certificates and keys ...
	I0918 19:38:45.111012    8317 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 19:38:45.111145    8317 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 19:38:45.481358    8317 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:38:46.038653    8317 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:38:46.455237    8317 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:38:46.906953    8317 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 19:38:47.540293    8317 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 19:38:47.540595    8317 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-923322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 19:38:48.147649    8317 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 19:38:48.147874    8317 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-923322 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 19:38:48.465599    8317 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:38:48.678189    8317 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:38:49.270631    8317 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 19:38:49.270849    8317 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:38:49.555442    8317 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:38:50.420523    8317 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 19:38:51.171657    8317 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:38:51.726296    8317 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:38:52.169784    8317 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:38:52.170816    8317 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:38:52.174160    8317 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:38:52.176108    8317 out.go:235]   - Booting up control plane ...
	I0918 19:38:52.176212    8317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:38:52.176289    8317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:38:52.177645    8317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:38:52.190406    8317 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:38:52.196953    8317 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:38:52.197009    8317 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 19:38:52.299744    8317 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 19:38:52.299864    8317 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 19:38:53.296316    8317 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001652251s
	I0918 19:38:53.296437    8317 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0918 19:38:59.297834    8317 kubeadm.go:310] [api-check] The API server is healthy after 6.001657206s
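
Both waits above are plain HTTP health polls: kubelet's /healthz on 127.0.0.1:10248 and then the API server's health endpoint, each with a 4m0s ceiling. A minimal poller with a deadline (sketch; kubeadm's real check also backs off and inspects the response):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the deadline passes.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("kubelet healthy")
}
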
	I0918 19:38:59.323589    8317 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:38:59.344013    8317 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:38:59.378343    8317 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:38:59.378785    8317 kubeadm.go:310] [mark-control-plane] Marking the node addons-923322 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:38:59.393303    8317 kubeadm.go:310] [bootstrap-token] Using token: 96pzjz.thy6lyeyktx1vx9a
	I0918 19:38:59.396113    8317 out.go:235]   - Configuring RBAC rules ...
	I0918 19:38:59.396244    8317 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:38:59.405658    8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:38:59.413935    8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:38:59.420442    8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:38:59.425820    8317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:38:59.430123    8317 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:38:59.705030    8317 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:39:00.188651    8317 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0918 19:39:00.704101    8317 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0918 19:39:00.705355    8317 kubeadm.go:310] 
	I0918 19:39:00.705430    8317 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0918 19:39:00.705436    8317 kubeadm.go:310] 
	I0918 19:39:00.705527    8317 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0918 19:39:00.705544    8317 kubeadm.go:310] 
	I0918 19:39:00.705570    8317 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0918 19:39:00.705634    8317 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:39:00.705689    8317 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:39:00.705698    8317 kubeadm.go:310] 
	I0918 19:39:00.705752    8317 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0918 19:39:00.705760    8317 kubeadm.go:310] 
	I0918 19:39:00.705813    8317 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:39:00.705821    8317 kubeadm.go:310] 
	I0918 19:39:00.705873    8317 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0918 19:39:00.705952    8317 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:39:00.706025    8317 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:39:00.706034    8317 kubeadm.go:310] 
	I0918 19:39:00.706119    8317 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:39:00.706203    8317 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0918 19:39:00.706212    8317 kubeadm.go:310] 
	I0918 19:39:00.706297    8317 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 96pzjz.thy6lyeyktx1vx9a \
	I0918 19:39:00.706404    8317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9eecf3dbed3b3dd0d2c4f53b9183d7bca1cdee4ca3fecbf261d3f759ffc8a8d8 \
	I0918 19:39:00.706428    8317 kubeadm.go:310] 	--control-plane 
	I0918 19:39:00.706436    8317 kubeadm.go:310] 
	I0918 19:39:00.706531    8317 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:39:00.706541    8317 kubeadm.go:310] 
	I0918 19:39:00.706624    8317 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 96pzjz.thy6lyeyktx1vx9a \
	I0918 19:39:00.706732    8317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9eecf3dbed3b3dd0d2c4f53b9183d7bca1cdee4ca3fecbf261d3f759ffc8a8d8 
	I0918 19:39:00.710356    8317 kubeadm.go:310] W0918 19:38:44.934790    1831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:00.710662    8317 kubeadm.go:310] W0918 19:38:44.935805    1831 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0918 19:39:00.710885    8317 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0918 19:39:00.710994    8317 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
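The two W-level lines above are kubeadm's own deprecation warnings: the generated config still uses the kubeadm.k8s.io/v1beta3 API, which kubeadm v1.31 accepts but flags. The fix kubeadm suggests is its built-in migrator (the file names here are illustrative); on v1.31 the rewritten files should come out under the newer kubeadm.k8s.io/v1beta4 API:

    kubeadm config migrate --old-config old.yaml --new-config new.yaml

The SystemVerification and Service-Kubelet warnings are likewise benign in this environment: the AWS kernel doesn't ship the "configs" module, and minikube starts the kubelet explicitly (see the `sudo systemctl start kubelet` call later in this log) rather than relying on `systemctl enable`.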
	I0918 19:39:00.711013    8317 cni.go:84] Creating CNI manager for ""
	I0918 19:39:00.711031    8317 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:39:00.713910    8317 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0918 19:39:00.716824    8317 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0918 19:39:00.725846    8317 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
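The 496-byte file scp'd above is the bridge CNI config announced at 19:39:00.713. The log doesn't echo its contents; a typical minikube bridge conflist of that size looks roughly like the following (an illustrative sketch, not the literal bytes — the pod subnet in particular is an assumption):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }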
	I0918 19:39:00.746943    8317 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:39:00.747072    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:00.747165    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-923322 minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91 minikube.k8s.io/name=addons-923322 minikube.k8s.io/primary=true
	I0918 19:39:00.985496    8317 ops.go:34] apiserver oom_adj: -16
	I0918 19:39:00.985647    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:01.485785    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:01.986567    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:02.485797    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:02.985997    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:03.486321    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:03.985991    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:04.485879    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:04.985809    8317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:39:05.104949    8317 kubeadm.go:1113] duration metric: took 4.357920781s to wait for elevateKubeSystemPrivileges
	I0918 19:39:05.104983    8317 kubeadm.go:394] duration metric: took 20.323211845s to StartCluster
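The burst of `kubectl get sa default` calls between 19:39:00.985 and 19:39:05.104 is the elevateKubeSystemPrivileges wait that the duration metric above refers to: minikube binds cluster-admin to the kube-system default service account (the clusterrolebinding at 19:39:00.747) and then polls roughly every 500ms until the default service account exists, which is its signal that the controller-manager's service-account controller is live. A hand-rolled equivalent of the same loop (a sketch, using the same in-node kubeconfig):

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default
    until kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done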
	I0918 19:39:05.105004    8317 settings.go:142] acquiring lock: {Name:mka60e55fdc2e0389e1fbfa23792ee022689e7b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:05.105150    8317 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 19:39:05.105558    8317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/kubeconfig: {Name:mk8ee68a7fcf0033412d5c9abf2a4743eba0e82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:39:05.105767    8317 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 19:39:05.105894    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:39:05.106160    8317 config.go:182] Loaded profile config "addons-923322": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:39:05.106214    8317 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
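The toEnable map above is the full addon matrix for this profile: everything the Addons test exercises (registry, ingress, ingress-dns, csi-hostpath-driver, metrics-server, gcp-auth, volcano, volumesnapshots, yakd, and so on) is flipped to true, the rest left false. Outside the test harness the same set can be requested with stock minikube commands, for example:

    minikube start -p addons-923322 --addons=registry --addons=ingress --addons=csi-hostpath-driver
    # or one at a time once the cluster is up:
    minikube -p addons-923322 addons enable registry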
	I0918 19:39:05.106303    8317 addons.go:69] Setting yakd=true in profile "addons-923322"
	I0918 19:39:05.106320    8317 addons.go:234] Setting addon yakd=true in "addons-923322"
	I0918 19:39:05.106345    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.106837    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.107305    8317 addons.go:69] Setting metrics-server=true in profile "addons-923322"
	I0918 19:39:05.107333    8317 addons.go:234] Setting addon metrics-server=true in "addons-923322"
	I0918 19:39:05.107370    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.107831    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.112005    8317 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-923322"
	I0918 19:39:05.112097    8317 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-923322"
	I0918 19:39:05.112173    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.112293    8317 addons.go:69] Setting cloud-spanner=true in profile "addons-923322"
	I0918 19:39:05.112404    8317 addons.go:234] Setting addon cloud-spanner=true in "addons-923322"
	I0918 19:39:05.112450    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.112638    8317 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-923322"
	I0918 19:39:05.112676    8317 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-923322"
	I0918 19:39:05.112696    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.113147    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.115694    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.119761    8317 addons.go:69] Setting default-storageclass=true in profile "addons-923322"
	I0918 19:39:05.119856    8317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-923322"
	I0918 19:39:05.120258    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.123452    8317 addons.go:69] Setting registry=true in profile "addons-923322"
	I0918 19:39:05.123534    8317 addons.go:234] Setting addon registry=true in "addons-923322"
	I0918 19:39:05.123612    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.124136    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.124666    8317 addons.go:69] Setting gcp-auth=true in profile "addons-923322"
	I0918 19:39:05.124711    8317 mustload.go:65] Loading cluster: addons-923322
	I0918 19:39:05.124891    8317 config.go:182] Loaded profile config "addons-923322": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:39:05.125133    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.135100    8317 addons.go:69] Setting storage-provisioner=true in profile "addons-923322"
	I0918 19:39:05.135230    8317 addons.go:234] Setting addon storage-provisioner=true in "addons-923322"
	I0918 19:39:05.135325    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.136041    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.137209    8317 addons.go:69] Setting ingress=true in profile "addons-923322"
	I0918 19:39:05.137242    8317 addons.go:234] Setting addon ingress=true in "addons-923322"
	I0918 19:39:05.137289    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.137758    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.159017    8317 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-923322"
	I0918 19:39:05.159054    8317 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-923322"
	I0918 19:39:05.159330    8317 addons.go:69] Setting ingress-dns=true in profile "addons-923322"
	I0918 19:39:05.159412    8317 addons.go:234] Setting addon ingress-dns=true in "addons-923322"
	I0918 19:39:05.159486    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.160228    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.160695    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.175319    8317 addons.go:69] Setting volcano=true in profile "addons-923322"
	I0918 19:39:05.175355    8317 addons.go:234] Setting addon volcano=true in "addons-923322"
	I0918 19:39:05.175400    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.175910    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.176135    8317 addons.go:69] Setting inspektor-gadget=true in profile "addons-923322"
	I0918 19:39:05.176163    8317 addons.go:234] Setting addon inspektor-gadget=true in "addons-923322"
	I0918 19:39:05.176198    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.176643    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.188439    8317 out.go:177] * Verifying Kubernetes components...
	I0918 19:39:05.203989    8317 addons.go:69] Setting volumesnapshots=true in profile "addons-923322"
	I0918 19:39:05.204026    8317 addons.go:234] Setting addon volumesnapshots=true in "addons-923322"
	I0918 19:39:05.204064    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.204566    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.265990    8317 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0918 19:39:05.269671    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 19:39:05.269828    8317 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0918 19:39:05.269856    8317 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0918 19:39:05.269959    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.283712    8317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:39:05.289568    8317 addons.go:234] Setting addon default-storageclass=true in "addons-923322"
	I0918 19:39:05.289665    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.290132    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.292418    8317 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0918 19:39:05.292907    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.313992    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.318404    8317 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0918 19:39:05.318535    8317 out.go:177]   - Using image docker.io/registry:2.8.3
	I0918 19:39:05.318785    8317 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 19:39:05.318798    8317 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 19:39:05.318862    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.334197    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 19:39:05.353890    8317 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:05.353976    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0918 19:39:05.367644    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.355084    8317 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-923322"
	I0918 19:39:05.374631    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:05.377626    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:05.384517    8317 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0918 19:39:05.388916    8317 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 19:39:05.388975    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0918 19:39:05.389063    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.406811    8317 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0918 19:39:05.406979    8317 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0918 19:39:05.407066    8317 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0918 19:39:05.407282    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 19:39:05.433428    8317 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 19:39:05.433514    8317 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 19:39:05.433625    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.436923    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 19:39:05.441303    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 19:39:05.447776    8317 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:05.447798    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 19:39:05.447864    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.451453    8317 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0918 19:39:05.455654    8317 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0918 19:39:05.463298    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 19:39:05.464733    8317 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:39:05.464764    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0918 19:39:05.464834    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.479522    8317 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:39:05.479763    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 19:39:05.487198    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 19:39:05.487456    8317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:05.487489    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:39:05.487582    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.492512    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 19:39:05.492541    8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 19:39:05.492624    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.517317    8317 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 19:39:05.520087    8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 19:39:05.520114    8317 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 19:39:05.520184    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.530045    8317 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0918 19:39:05.535674    8317 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:05.535767    8317 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:39:05.535844    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.546103    8317 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:05.546125    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 19:39:05.546188    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.549600    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.551631    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:05.561914    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.571301    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:05.575766    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0918 19:39:05.579356    8317 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:05.579377    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0918 19:39:05.579440    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.630294    8317 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0918 19:39:05.633693    8317 out.go:177]   - Using image docker.io/busybox:stable
	I0918 19:39:05.636807    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.637638    8317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:05.637658    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0918 19:39:05.637719    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:05.651508    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.686815    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.713424    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.715661    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.719625    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.763839    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.764245    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.777174    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:05.778594    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	W0918 19:39:05.780528    8317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0918 19:39:05.780563    8317 retry.go:31] will retry after 310.121584ms: ssh: handshake failed: EOF
	I0918 19:39:05.799001    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	W0918 19:39:05.800227    8317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0918 19:39:05.800252    8317 retry.go:31] will retry after 155.211495ms: ssh: handshake failed: EOF
	I0918 19:39:05.808792    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
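The two "ssh: handshake failed: EOF" warnings in this stretch are transient: more than a dozen addon installers dial the node's forwarded SSH port (127.0.0.1:32768) almost simultaneously, a couple of dials get dropped before the handshake completes, and retry.go re-dials after 155-310ms, after which every client connects. The port each client targets is resolved with the inspect template that recurs throughout this log:

    docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-923322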
	I0918 19:39:06.098336    8317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:39:06.098491    8317 ssh_runner.go:195] Run: sudo systemctl start kubelet
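The pipeline launched at 19:39:06.098 rewrites the coredns ConfigMap in place: it dumps the Corefile, uses sed to splice a hosts stanza in front of the `forward . /etc/resolv.conf` line and a `log` directive in front of `errors`, then feeds the result to `kubectl replace`. Reconstructed from those sed expressions, the edited Corefile region looks like this (other stanzas unchanged and elided):

    .:53 {
        log
        errors
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }

That hosts block is what the "host record injected into CoreDNS's ConfigMap" line at 19:39:08.341 confirms: pods can now resolve host.minikube.internal to the Docker gateway.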
	I0918 19:39:06.630986    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0918 19:39:06.802995    8317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 19:39:06.803069    8317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 19:39:06.817310    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:39:06.837823    8317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 19:39:06.837892    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 19:39:06.878694    8317 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0918 19:39:06.878772    8317 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0918 19:39:06.885476    8317 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 19:39:06.885545    8317 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 19:39:06.934218    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0918 19:39:06.951456    8317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 19:39:06.951533    8317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 19:39:06.954287    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0918 19:39:06.978575    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 19:39:06.995336    8317 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 19:39:06.995413    8317 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 19:39:07.013465    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 19:39:07.013487    8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 19:39:07.033888    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 19:39:07.037932    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 19:39:07.120426    8317 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0918 19:39:07.120501    8317 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0918 19:39:07.160113    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 19:39:07.160186    8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 19:39:07.165365    8317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 19:39:07.165440    8317 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 19:39:07.193778    8317 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:07.193855    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 19:39:07.204233    8317 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 19:39:07.204296    8317 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 19:39:07.211963    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:39:07.220172    8317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 19:39:07.220247    8317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 19:39:07.352664    8317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:07.352751    8317 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 19:39:07.424821    8317 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 19:39:07.424903    8317 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 19:39:07.438775    8317 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0918 19:39:07.438842    8317 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0918 19:39:07.531673    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 19:39:07.534812    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 19:39:07.534888    8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 19:39:07.572971    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 19:39:07.573060    8317 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 19:39:07.703628    8317 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:07.703708    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0918 19:39:07.710069    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 19:39:07.738049    8317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 19:39:07.738134    8317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 19:39:07.767125    8317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 19:39:07.767197    8317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 19:39:07.929895    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 19:39:07.929977    8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 19:39:08.013762    8317 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:08.013834    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 19:39:08.171181    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:08.187935    8317 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 19:39:08.188014    8317 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 19:39:08.190795    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0918 19:39:08.331124    8317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 19:39:08.331200    8317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 19:39:08.341548    8317 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.243026473s)
	I0918 19:39:08.341717    8317 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.243346737s)
	I0918 19:39:08.341769    8317 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0918 19:39:08.343311    8317 node_ready.go:35] waiting up to 6m0s for node "addons-923322" to be "Ready" ...
	I0918 19:39:08.347235    8317 node_ready.go:49] node "addons-923322" has status "Ready":"True"
	I0918 19:39:08.347318    8317 node_ready.go:38] duration metric: took 3.97799ms for node "addons-923322" to be "Ready" ...
	I0918 19:39:08.347376    8317 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:08.370085    8317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace to be "Ready" ...
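pod_ready polls the pod object directly, producing the "Ready":"False" status lines that follow every couple of seconds until coredns settles. The kubectl equivalent of the same wait would be roughly:

    kubectl --context addons-923322 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=6m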
	I0918 19:39:08.487975    8317 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:08.488046    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0918 19:39:08.653242    8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 19:39:08.653314    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 19:39:08.729990    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 19:39:08.846855    8317 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-923322" context rescaled to 1 replicas
	I0918 19:39:09.098320    8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 19:39:09.098347    8317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 19:39:09.151426    8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 19:39:09.151450    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 19:39:09.177027    8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 19:39:09.177052    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 19:39:09.200049    8317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:09.200076    8317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 19:39:09.222555    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 19:39:10.418929    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:12.325387    8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 19:39:12.325541    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:12.354091    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:12.881123    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:13.536360    8317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 19:39:13.923755    8317 addons.go:234] Setting addon gcp-auth=true in "addons-923322"
	I0918 19:39:13.923860    8317 host.go:66] Checking if "addons-923322" exists ...
	I0918 19:39:13.924427    8317 cli_runner.go:164] Run: docker container inspect addons-923322 --format={{.State.Status}}
	I0918 19:39:13.958533    8317 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 19:39:13.958589    8317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-923322
	I0918 19:39:13.985386    8317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/addons-923322/id_rsa Username:docker}
	I0918 19:39:15.376550    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:17.537058    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:18.386750    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.75567805s)
	I0918 19:39:18.386909    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.569527313s)
	I0918 19:39:18.386986    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.452673423s)
	I0918 19:39:18.387068    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.432698229s)
	I0918 19:39:18.387319    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (11.40867665s)
	I0918 19:39:18.387477    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.353569674s)
	I0918 19:39:18.387616    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.349663642s)
	I0918 19:39:18.387646    8317 addons.go:475] Verifying addon ingress=true in "addons-923322"
	I0918 19:39:18.387882    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.175845839s)
	I0918 19:39:18.388117    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.856372369s)
	I0918 19:39:18.388134    8317 addons.go:475] Verifying addon registry=true in "addons-923322"
	I0918 19:39:18.388405    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.678249913s)
	I0918 19:39:18.388542    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.217256672s)
	W0918 19:39:18.388597    8317 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 19:39:18.388856    8317 retry.go:31] will retry after 354.904914ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
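The failure being retried here is a CRD ordering race, not a real breakage: a single `kubectl apply` batch creates both the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass object, and the API server hasn't registered the new API by the time the CR is submitted — hence "ensure CRDs are installed first". minikube's answer is simply to retry (with `--force`, at 19:39:18.744 below), which succeeds on the second pass. Done by hand, the race can be avoided by waiting for the CRD to reach the Established condition between two applies (a sketch):

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml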
	I0918 19:39:18.388921    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.198053001s)
	I0918 19:39:18.388442    8317 addons.go:475] Verifying addon metrics-server=true in "addons-923322"
	I0918 19:39:18.389133    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.659111267s)
	I0918 19:39:18.391143    8317 out.go:177] * Verifying registry addon...
	I0918 19:39:18.391219    8317 out.go:177] * Verifying ingress addon...
	I0918 19:39:18.394259    8317 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-923322 service yakd-dashboard -n yakd-dashboard
	
	I0918 19:39:18.395189    8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 19:39:18.396120    8317 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 19:39:18.435793    8317 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 19:39:18.435821    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:18.436874    8317 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 19:39:18.436893    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0918 19:39:18.489412    8317 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
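This warning is a benign optimistic-concurrency conflict rather than a failed addon: default-storageclass tried to clear the default-class annotation on the rancher local-path StorageClass while storage-provisioner-rancher was still mutating the same object, so the update hit a stale resourceVersion. The operation it was attempting amounts to the following (a sketch; a retry with a fresh read succeeds):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'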
	I0918 19:39:18.744418    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 19:39:18.929726    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:18.930399    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:19.097383    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.874770223s)
	I0918 19:39:19.097420    8317 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-923322"
	I0918 19:39:19.097671    8317 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.139114696s)
	I0918 19:39:19.100648    8317 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 19:39:19.100771    8317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0918 19:39:19.103644    8317 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0918 19:39:19.104527    8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 19:39:19.112005    8317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 19:39:19.112036    8317 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 19:39:19.113912    8317 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 19:39:19.113940    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:19.241888    8317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 19:39:19.241921    8317 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 19:39:19.334561    8317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:19.334592    8317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0918 19:39:19.403340    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:19.404658    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:19.428569    8317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 19:39:19.609500    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:19.885665    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:19.930291    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:19.931003    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:20.119330    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:20.402008    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:20.402643    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:20.610139    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:20.903770    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:20.908468    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.071305    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.326834314s)
	I0918 19:39:21.080442    8317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.651780197s)
	I0918 19:39:21.087579    8317 addons.go:475] Verifying addon gcp-auth=true in "addons-923322"
	I0918 19:39:21.092179    8317 out.go:177] * Verifying gcp-auth addon...
	I0918 19:39:21.095765    8317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 19:39:21.099126    8317 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 19:39:21.110355    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:21.400986    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:21.401966    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.609949    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:21.902986    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:21.905021    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.109713    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.377076    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:22.401027    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.402524    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:22.611321    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:22.903185    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:22.903608    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.109607    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.403771    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:23.404671    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:23.609320    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:23.900124    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:23.902262    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:24.110052    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.377227    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:24.399374    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:24.401220    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:24.610327    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:24.901954    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:24.903676    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:25.110819    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.401695    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:25.403489    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:25.610418    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:25.899581    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:25.902598    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:26.109730    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.378023    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:26.401209    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:26.401997    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:26.609774    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:26.900291    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:26.901951    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:27.109382    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.400932    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:27.401417    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:27.611153    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:27.900909    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:27.904582    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:28.109858    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.400941    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:28.402832    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:28.610615    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:28.876668    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:28.902115    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:28.903574    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:29.109165    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.400066    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:29.401534    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:29.609273    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:29.902428    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:29.903903    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:30.141629    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.401846    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:30.402424    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:30.609302    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:30.877747    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:30.902748    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:30.904408    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:31.110642    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.403179    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:31.404217    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:31.610435    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:31.898819    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:31.906072    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:32.110404    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.400411    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:32.400903    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:32.609358    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:32.900013    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:32.901264    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.110437    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.376912    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:33.402460    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:33.402783    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.609819    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:33.901256    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:33.901778    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:34.109189    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.415676    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:34.416810    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:34.613125    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:34.903123    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:34.904123    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:35.111704    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.377759    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:35.399778    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:35.404667    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:35.611389    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:35.901682    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:35.902524    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.109635    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:36.400046    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.401012    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:36.609655    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:36.902573    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:36.903767    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:37.111300    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:37.401370    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:37.403518    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:37.608925    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:37.877572    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:37.903056    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:37.904465    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:38.110452    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:38.399766    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:38.401677    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:38.610155    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:38.902260    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:38.903464    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.110404    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:39.400393    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.402711    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:39.609999    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:39.877937    8317 pod_ready.go:103] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"False"
	I0918 19:39:39.915913    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:39.917261    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:40.118288    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:40.378198    8317 pod_ready.go:93] pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:40.378224    8317 pod_ready.go:82] duration metric: took 32.008047555s for pod "coredns-7c65d6cfc9-2g4l7" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.378236    8317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.380524    8317 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xsvnk" not found
	I0918 19:39:40.380621    8317 pod_ready.go:82] duration metric: took 2.370461ms for pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace to be "Ready" ...
	E0918 19:39:40.380650    8317 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-xsvnk" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-xsvnk" not found
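The NotFound above is expected rather than a failure: minikube typically scales the CoreDNS Deployment down to a single replica during bootstrap, so the second pod (coredns-7c65d6cfc9-xsvnk) disappears while the wait loop still holds its name, and pod_ready deliberately skips it. A quick manual confirmation (context name taken from this log) would be:

	kubectl --context addons-923322 -n kube-system get deploy coredns

which should report 1/1 ready replicas once the scale-down has settled.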
	I0918 19:39:40.380684    8317 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.386469    8317 pod_ready.go:93] pod "etcd-addons-923322" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:40.386537    8317 pod_ready.go:82] duration metric: took 5.823217ms for pod "etcd-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.386564    8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.392934    8317 pod_ready.go:93] pod "kube-apiserver-addons-923322" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:40.393012    8317 pod_ready.go:82] duration metric: took 6.425166ms for pod "kube-apiserver-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.393038    8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.403793    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:40.404138    8317 pod_ready.go:93] pod "kube-controller-manager-addons-923322" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:40.404150    8317 pod_ready.go:82] duration metric: took 11.089651ms for pod "kube-controller-manager-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.404161    8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-c2h5g" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.406597    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:40.574787    8317 pod_ready.go:93] pod "kube-proxy-c2h5g" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:40.574815    8317 pod_ready.go:82] duration metric: took 170.646635ms for pod "kube-proxy-c2h5g" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.574827    8317 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.609679    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:40.903224    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:40.904549    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:40.975183    8317 pod_ready.go:93] pod "kube-scheduler-addons-923322" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:40.975210    8317 pod_ready.go:82] duration metric: took 400.375731ms for pod "kube-scheduler-addons-923322" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:40.975223    8317 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-cddcv" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:41.113674    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:41.374475    8317 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-cddcv" in "kube-system" namespace has status "Ready":"True"
	I0918 19:39:41.374558    8317 pod_ready.go:82] duration metric: took 399.325225ms for pod "nvidia-device-plugin-daemonset-cddcv" in "kube-system" namespace to be "Ready" ...
	I0918 19:39:41.374584    8317 pod_ready.go:39] duration metric: took 33.02716277s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:39:41.374632    8317 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:39:41.374723    8317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:39:41.392894    8317 api_server.go:72] duration metric: took 36.287089728s to wait for apiserver process to appear ...
	I0918 19:39:41.392921    8317 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:39:41.392943    8317 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0918 19:39:41.401502    8317 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0918 19:39:41.403309    8317 api_server.go:141] control plane version: v1.31.1
	I0918 19:39:41.403352    8317 api_server.go:131] duration metric: took 10.424121ms to wait for apiserver health ...
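The healthz probe is an unauthenticated GET against the apiserver. Assuming the default RBAC binding that exposes /healthz to unauthenticated users is still in place, the same check can be reproduced from the host with the IP and port reported above (-k because no client certificate is presented):

	curl -k https://192.168.49.2:8443/healthz

A healthy apiserver answers 200 with the body "ok", matching the two log lines just before this point.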
	I0918 19:39:41.403362    8317 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:39:41.405130    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 19:39:41.407782    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.581317    8317 system_pods.go:59] 17 kube-system pods found
	I0918 19:39:41.581356    8317 system_pods.go:61] "coredns-7c65d6cfc9-2g4l7" [c4764c8a-196f-4d05-87d9-0c7d78489b01] Running
	I0918 19:39:41.581365    8317 system_pods.go:61] "csi-hostpath-attacher-0" [097868b0-2207-40a0-8638-29d43c76956f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:41.581374    8317 system_pods.go:61] "csi-hostpath-resizer-0" [27f2f88c-98ce-450b-9dd4-39098fa9d3c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:41.581382    8317 system_pods.go:61] "csi-hostpathplugin-qg252" [c24860db-28aa-4eca-aa5e-a23c98d972b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:41.581387    8317 system_pods.go:61] "etcd-addons-923322" [16737166-da6e-4bdb-9dc7-f99f689862bd] Running
	I0918 19:39:41.581393    8317 system_pods.go:61] "kube-apiserver-addons-923322" [b9c7d74b-a9a4-442a-a79f-cb524b0620fa] Running
	I0918 19:39:41.581398    8317 system_pods.go:61] "kube-controller-manager-addons-923322" [0b1c1ad7-9dec-4a2f-8169-ed1ee5b84119] Running
	I0918 19:39:41.581406    8317 system_pods.go:61] "kube-ingress-dns-minikube" [22538dc0-3ac3-4849-83e9-9fc02c69f1d9] Running
	I0918 19:39:41.581413    8317 system_pods.go:61] "kube-proxy-c2h5g" [ec2420ba-b77d-4ef0-849d-aad464f1ef73] Running
	I0918 19:39:41.581420    8317 system_pods.go:61] "kube-scheduler-addons-923322" [74a64aa9-7aa5-4dea-b57e-a60b25beb834] Running
	I0918 19:39:41.581426    8317 system_pods.go:61] "metrics-server-84c5f94fbc-hwphq" [b9ffea56-bc3b-4b0e-b302-9726b4125780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:39:41.581430    8317 system_pods.go:61] "nvidia-device-plugin-daemonset-cddcv" [b574c98b-2a15-4629-9c56-0509a4565cf5] Running
	I0918 19:39:41.581441    8317 system_pods.go:61] "registry-66c9cd494c-m9pdd" [be6aeece-e555-4628-88de-f374e1e78aa3] Running
	I0918 19:39:41.581447    8317 system_pods.go:61] "registry-proxy-rxskq" [e2a2228e-559d-447a-953c-77300e373ad5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 19:39:41.581453    8317 system_pods.go:61] "snapshot-controller-56fcc65765-lwgp4" [db3a36fd-16b8-42f2-9ce8-efd2efdbc731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:41.581465    8317 system_pods.go:61] "snapshot-controller-56fcc65765-vp9xg" [4fb8cfa7-1048-4342-b3ef-7f8597d3541e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:41.581471    8317 system_pods.go:61] "storage-provisioner" [4a0413f0-5a79-47ac-856d-06ca4c5730d5] Running
	I0918 19:39:41.581480    8317 system_pods.go:74] duration metric: took 178.112066ms to wait for pod list to return data ...
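This snapshot — the Running control-plane pods plus the still-Pending csi-hostpath, metrics-server, registry-proxy and snapshot-controller pods — is an ordinary pod list; the equivalent manual read would be:

	kubectl --context addons-923322 -n kube-system get pods -o wide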
	I0918 19:39:41.581487    8317 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:39:41.609613    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:41.778931    8317 default_sa.go:45] found service account: "default"
	I0918 19:39:41.778966    8317 default_sa.go:55] duration metric: took 197.472023ms for default service account to be created ...
	I0918 19:39:41.778975    8317 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:39:41.900716    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:41.901547    8317 kapi.go:107] duration metric: took 23.50635637s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 19:39:41.980877    8317 system_pods.go:86] 17 kube-system pods found
	I0918 19:39:41.980912    8317 system_pods.go:89] "coredns-7c65d6cfc9-2g4l7" [c4764c8a-196f-4d05-87d9-0c7d78489b01] Running
	I0918 19:39:41.980923    8317 system_pods.go:89] "csi-hostpath-attacher-0" [097868b0-2207-40a0-8638-29d43c76956f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0918 19:39:41.980931    8317 system_pods.go:89] "csi-hostpath-resizer-0" [27f2f88c-98ce-450b-9dd4-39098fa9d3c0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 19:39:41.980940    8317 system_pods.go:89] "csi-hostpathplugin-qg252" [c24860db-28aa-4eca-aa5e-a23c98d972b7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 19:39:41.980944    8317 system_pods.go:89] "etcd-addons-923322" [16737166-da6e-4bdb-9dc7-f99f689862bd] Running
	I0918 19:39:41.980949    8317 system_pods.go:89] "kube-apiserver-addons-923322" [b9c7d74b-a9a4-442a-a79f-cb524b0620fa] Running
	I0918 19:39:41.980954    8317 system_pods.go:89] "kube-controller-manager-addons-923322" [0b1c1ad7-9dec-4a2f-8169-ed1ee5b84119] Running
	I0918 19:39:41.980960    8317 system_pods.go:89] "kube-ingress-dns-minikube" [22538dc0-3ac3-4849-83e9-9fc02c69f1d9] Running
	I0918 19:39:41.980965    8317 system_pods.go:89] "kube-proxy-c2h5g" [ec2420ba-b77d-4ef0-849d-aad464f1ef73] Running
	I0918 19:39:41.980969    8317 system_pods.go:89] "kube-scheduler-addons-923322" [74a64aa9-7aa5-4dea-b57e-a60b25beb834] Running
	I0918 19:39:41.980979    8317 system_pods.go:89] "metrics-server-84c5f94fbc-hwphq" [b9ffea56-bc3b-4b0e-b302-9726b4125780] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0918 19:39:41.980993    8317 system_pods.go:89] "nvidia-device-plugin-daemonset-cddcv" [b574c98b-2a15-4629-9c56-0509a4565cf5] Running
	I0918 19:39:41.980998    8317 system_pods.go:89] "registry-66c9cd494c-m9pdd" [be6aeece-e555-4628-88de-f374e1e78aa3] Running
	I0918 19:39:41.981002    8317 system_pods.go:89] "registry-proxy-rxskq" [e2a2228e-559d-447a-953c-77300e373ad5] Running
	I0918 19:39:41.981009    8317 system_pods.go:89] "snapshot-controller-56fcc65765-lwgp4" [db3a36fd-16b8-42f2-9ce8-efd2efdbc731] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:41.981019    8317 system_pods.go:89] "snapshot-controller-56fcc65765-vp9xg" [4fb8cfa7-1048-4342-b3ef-7f8597d3541e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 19:39:41.981023    8317 system_pods.go:89] "storage-provisioner" [4a0413f0-5a79-47ac-856d-06ca4c5730d5] Running
	I0918 19:39:41.981031    8317 system_pods.go:126] duration metric: took 202.049608ms to wait for k8s-apps to be running ...
	I0918 19:39:41.981037    8317 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:39:41.981095    8317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:39:41.994510    8317 system_svc.go:56] duration metric: took 13.461902ms WaitForService to wait for kubelet
	I0918 19:39:41.994536    8317 kubeadm.go:582] duration metric: took 36.888737118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:39:41.994556    8317 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:39:42.111134    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:42.175904    8317 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 19:39:42.175954    8317 node_conditions.go:123] node cpu capacity is 2
	I0918 19:39:42.175970    8317 node_conditions.go:105] duration metric: took 181.407561ms to run NodePressure ...
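The capacity figures above (203034800Ki of ephemeral storage, 2 CPUs) are read straight off the node object; assuming the single node carries the profile name, they can be confirmed with:

	kubectl --context addons-923322 get node addons-923322 -o jsonpath='{.status.capacity}'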
	I0918 19:39:42.175983    8317 start.go:241] waiting for startup goroutines ...
	I0918 19:39:42.400907    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:42.609627    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:42.901059    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:43.119661    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:43.401225    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:43.609945    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:43.905504    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:44.110752    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:44.403916    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:44.615652    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:44.902567    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:45.153296    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:45.401721    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:45.610788    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:45.906106    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.123589    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:46.401975    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:46.609947    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:46.906066    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.110039    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.401038    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:47.609505    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:47.903111    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.110208    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.402300    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:48.609667    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:48.902681    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.110204    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.408589    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:49.611159    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:49.900511    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.110931    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.401451    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:50.610771    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:50.901486    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.110273    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.400617    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:51.609409    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:51.900253    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.202002    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.400576    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:52.609524    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:52.901235    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.109976    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.401131    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:53.609369    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:53.900158    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.110993    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.400539    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:54.609992    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:54.901085    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.110167    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.404291    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:55.609804    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:55.901079    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.109992    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.400943    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:56.609215    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:56.901109    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.110218    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.401467    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:57.609087    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:57.900543    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.110293    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.485756    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:58.609971    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:58.900633    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.109659    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.401107    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:39:59.611630    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:39:59.902025    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.170357    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.453660    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:00.624565    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:00.902762    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.110056    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.405340    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:01.609774    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:01.902424    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.109292    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.400709    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:02.609556    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:02.902340    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.111318    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.451442    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:03.610133    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:03.901048    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.109237    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.402237    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:04.610260    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:04.902171    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.110779    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.401704    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:05.612850    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:05.915456    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.110204    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.401949    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:06.610231    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:06.901831    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.110697    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.402138    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:07.609142    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:07.900674    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.109759    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.402011    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:08.610401    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:08.901813    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.109429    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.402698    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:09.611364    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:09.902125    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.110428    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.401992    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:10.610722    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:10.901863    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.110147    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.401672    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:11.609905    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:11.902241    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.109957    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.402242    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:12.610167    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:12.901597    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.110136    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.402152    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:13.609280    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:13.900259    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.110017    8317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 19:40:14.402182    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:14.609694    8317 kapi.go:107] duration metric: took 55.505161147s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
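Each kapi.go wait in this log is a poll on a label selector until every matching pod reports Ready. The csi-hostpath-driver wait that just completed is roughly equivalent to the following sketch (selector and namespace taken from this log; the timeout is illustrative, not the one the test uses):

	kubectl --context addons-923322 -n kube-system wait pod \
	  --for=condition=Ready \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  --timeout=6m

The remaining waits below differ only in selector and namespace: app.kubernetes.io/name=ingress-nginx (presumably in the ingress-nginx namespace) and kubernetes.io/minikube-addons=gcp-auth in gcp-auth.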
	I0918 19:40:14.900937    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.401211    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:15.900851    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.401502    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:16.900568    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.400614    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:17.900545    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.400391    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:18.901469    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.401582    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:19.901843    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.401357    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:20.902217    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.401325    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:21.901567    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.401988    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:22.901080    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.401348    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:23.900662    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.401319    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:24.901174    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:25.402354    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:25.900828    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.413286    8317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 19:40:26.904812    8317 kapi.go:107] duration metric: took 1m8.508691175s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 19:40:43.099690    8317 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	[... 138 near-identical kapi.go:96 poll lines elided: pod "kubernetes.io/minikube-addons=gcp-auth" stayed Pending from 19:40:43 to 19:41:51, polled roughly every 500ms ...]
	I0918 19:41:52.100439    8317 kapi.go:107] duration metric: took 2m31.004668501s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 19:41:52.103619    8317 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-923322 cluster.
	I0918 19:41:52.106714    8317 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 19:41:52.109335    8317 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 19:41:52.112085    8317 out.go:177] * Enabled addons: volcano, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0918 19:41:52.114866    8317 addons.go:510] duration metric: took 2m47.008652771s for enable addons: enabled=[volcano storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
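	The `gcp-auth-skip-secret` opt-out mentioned in the messages above is a plain pod label that the gcp-auth webhook checks before mounting credentials. A minimal sketch of opting one pod out (the pod name and image here are illustrative, not taken from this run):
	
	kubectl --context addons-923322 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds               # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"   # presence of this key skips the credential mount
	spec:
	  containers:
	  - name: app
	    image: gcr.io/k8s-minikube/busybox
	    command: ["sleep", "3600"]
	EOF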
	I0918 19:41:52.114961    8317 start.go:246] waiting for cluster config update ...
	I0918 19:41:52.114998    8317 start.go:255] writing updated cluster config ...
	I0918 19:41:52.115410    8317 ssh_runner.go:195] Run: rm -f paused
	I0918 19:41:52.518642    8317 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0918 19:41:52.521861    8317 out.go:177] * Done! kubectl is now configured to use "addons-923322" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.643369896Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.646245158Z" level=error msg="stream copy error: reading from a closed fifo"
	Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.648640924Z" level=error msg="Error running exec 11f613f847480c4ffd79f53b0abf9ba46c1d7bfb0d37641442af92526929c535 in container: OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 18 19:51:12 addons-923322 dockerd[1288]: time="2024-09-18T19:51:12.824612519Z" level=info msg="ignoring event" container=575c849999fe389738d2ad410b2ceed3350a6ef9a68d95d6136d8005a0a856c9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.194078456Z" level=info msg="ignoring event" container=ba8d0457deb684d9132d822104f7a376fa88b3228861c713a3ded3dfe618bedc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.194136968Z" level=info msg="ignoring event" container=cfd05a18613995b4270d324e3f50a7ef53adcc8a4d5e1fdd998564b6514fbb00 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.360038429Z" level=info msg="ignoring event" container=fc5ed7784b0174cb45fff1b9ad47785fae4d3053c13e2e389caf827787bc1e0f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:18 addons-923322 dockerd[1288]: time="2024-09-18T19:51:18.415908283Z" level=info msg="ignoring event" container=adb85c1097ec0b8d0f96db0328d74d56d3ab9646a094ed65404aab2f604d6421 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:26 addons-923322 dockerd[1288]: time="2024-09-18T19:51:26.060243390Z" level=info msg="ignoring event" container=59da5a4e8be5557cf24a0006a7e5d7ccde4463733bd446b46a4acc7e03cfb324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:26 addons-923322 dockerd[1288]: time="2024-09-18T19:51:26.209050933Z" level=info msg="ignoring event" container=839bb18337cf5c9669be45261274704ec640f36983e22a7425aa73fb3ce79bc1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:31 addons-923322 dockerd[1288]: time="2024-09-18T19:51:31.738649882Z" level=info msg="ignoring event" container=345affbde6eca46c178218ee9cc0964ae9988f14488e0bf1b4268d1d34de1954 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:32 addons-923322 dockerd[1288]: time="2024-09-18T19:51:32.289973533Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:51:32 addons-923322 dockerd[1288]: time="2024-09-18T19:51:32.293032051Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 18 19:51:38 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2007afa69f16fefe6d27d45d25d8677ca8b2554704dc5c7a054b1cff499b250c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 18 19:51:39 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:39Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 18 19:51:47 addons-923322 dockerd[1288]: time="2024-09-18T19:51:47.807922131Z" level=info msg="ignoring event" container=b8a861ae470ce7b01b9ec00242e1d1cea20128e8ba47d33fc28284b7af1a47c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:48 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5df8d2f53285da6caa5a7b0279936e51b51279a30ed3641307b8b2b18ecaf55/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.058218583Z" level=info msg="ignoring event" container=2f5e92316cafdab025dfa1c5f164e8e01cca4bd2a706c10581755d47ad92b385 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.254809257Z" level=info msg="ignoring event" container=371b94d41c801840c3dd27d8e6226905087b8f9c9b99cbaff78ce754c5db6c64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.402817844Z" level=info msg="ignoring event" container=b8e3df567fef6accffb45cc730463edba7764a51d630d4eb60d02bfc88e0ab1f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:49 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-rxskq_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.617325171Z" level=info msg="ignoring event" container=5427eb651ef2608d609fbff640d1e252a6aad2469944d2268e70942a35bb989b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:49 addons-923322 dockerd[1288]: time="2024-09-18T19:51:49.937992994Z" level=info msg="ignoring event" container=16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:50 addons-923322 dockerd[1288]: time="2024-09-18T19:51:50.093060536Z" level=info msg="ignoring event" container=78ffd0289fdbdf49914987bfd884db1c89ae8eb2f708212edbfce466d9e3b21c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 18 19:51:50 addons-923322 cri-dockerd[1546]: time="2024-09-18T19:51:50Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
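	The two "unauthorized: authentication failed" entries at 19:51:32 are the failed busybox pull behind the TestAddons/parallel/Registry timeout: the image never arrived, so the test pod had nothing to run. Retrying the pull outside the test harness helps separate a transient gcr.io problem from a test bug (a diagnostic sketch using this report's minikube binary):
	
	# gcr.io/k8s-minikube/busybox is a public image; an auth failure on re-pull
	# points at a transient registry/credential issue rather than the test itself.
	out/minikube-linux-arm64 -p addons-923322 ssh -- docker pull gcr.io/k8s-minikube/busybox:latest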
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	7e3d43634e0bd       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  Less than a second ago   Running             hello-world-app            0                   e5df8d2f53285       hello-world-app-55bf9c44b4-lzqv9
	3e8ffad627f1d       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                11 seconds ago           Running             nginx                      0                   2007afa69f16f       nginx
	e19eb0bfe3034       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago           Running             gcp-auth                   0                   e40f893f157fe       gcp-auth-89d5ffd79-x4mf2
	79ea640b27b01       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago           Running             controller                 0                   5fbb3c62ef5c0       ingress-nginx-controller-bc57996ff-85r62
	0fa2755032e59       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago           Exited              patch                      0                   34ecb094a8359       ingress-nginx-admission-patch-mfskz
	6eb4ee03c3b4e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago           Exited              create                     0                   c056ee282284e       ingress-nginx-admission-create-kggkx
	1111b9d74e51a       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        11 minutes ago           Running             yakd                       0                   57770d279a1d3       yakd-dashboard-67d98fc6b-4wvqd
	643a43d953b52       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago           Running             local-path-provisioner     0                   a47e0555a710f       local-path-provisioner-86d989889c-94sjr
	111bb68b4057b       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago           Running             cloud-spanner-emulator     0                   cb65a88ee01f0       cloud-spanner-emulator-769b77f747-pkc8f
	829714db4af63       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago           Running             nvidia-device-plugin-ctr   0                   99da0a499d1c5       nvidia-device-plugin-daemonset-cddcv
	fa048570e9486       ba04bb24b9575                                                                                                                12 minutes ago           Running             storage-provisioner        0                   c683a1ce5edda       storage-provisioner
	4bb2f7fc8d15f       2f6c962e7b831                                                                                                                12 minutes ago           Running             coredns                    0                   c698f497c9a95       coredns-7c65d6cfc9-2g4l7
	208ba88a814ba       24a140c548c07                                                                                                                12 minutes ago           Running             kube-proxy                 0                   02cffbceacd94       kube-proxy-c2h5g
	eb06e11940d5d       7f8aa378bb47d                                                                                                                12 minutes ago           Running             kube-scheduler             0                   f3a48bfa4509f       kube-scheduler-addons-923322
	4e05a51d5d389       d3f53a98c0a9d                                                                                                                12 minutes ago           Running             kube-apiserver             0                   a36bdadd9408c       kube-apiserver-addons-923322
	a9fec9e8cc3f5       279f381cb3736                                                                                                                12 minutes ago           Running             kube-controller-manager    0                   99566d6fa2aff       kube-controller-manager-addons-923322
	3fae247a18699       27e3830e14027                                                                                                                12 minutes ago           Running             etcd                       0                   d953109c1a7d2       etcd-addons-923322
	
	
	==> controller_ingress [79ea640b27b0] <==
	I0918 19:40:28.947311       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0918 19:40:28.947866       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0918 19:51:37.236669       6 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0918 19:51:37.256979       6 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.02s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.02s testedConfigurationSize:18.1kB}
	I0918 19:51:37.257013       6 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0918 19:51:37.268209       6 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0918 19:51:37.268653       6 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0918 19:51:37.268760       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0918 19:51:37.271950       6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"2827e562-8fe9-4e0d-8247-4a76a8cb788b", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2764", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0918 19:51:37.316647       6 controller.go:213] "Backend successfully reloaded"
	I0918 19:51:37.316990       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0918 19:51:40.603150       6 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0918 19:51:40.604653       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0918 19:51:40.650618       6 controller.go:213] "Backend successfully reloaded"
	I0918 19:51:40.651208       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0918 19:51:48.255417       6 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0918 19:51:48.280462       6 admission.go:149] processed ingress via admission controller {testedIngressLength:2 testedIngressTime:0.025s renderingIngressLength:2 renderingIngressTime:0s admissionTime:0.025s testedConfigurationSize:26.2kB}
	I0918 19:51:48.280553       6 main.go:107] "successfully validated configuration, accepting" ingress="kube-system/example-ingress"
	I0918 19:51:48.298093       6 store.go:440] "Found valid IngressClass" ingress="kube-system/example-ingress" ingressclass="nginx"
	W0918 19:51:48.298478       6 controller.go:1110] Error obtaining Endpoints for Service "kube-system/hello-world-app": no object matching key "kube-system/hello-world-app" in local store
	I0918 19:51:48.298561       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0918 19:51:48.306246       6 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"kube-system", Name:"example-ingress", UID:"59b67d79-41d2-4e2d-99cd-04f99addd7c8", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2808", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0918 19:51:48.413837       6 controller.go:213] "Backend successfully reloaded"
	I0918 19:51:48.414398       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-85r62", UID:"62a4f29e-5f4d-45fc-8a8e-b8987029ac81", APIVersion:"v1", ResourceVersion:"1252", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	10.244.0.1 - - [18/Sep/2024:19:51:47 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" 81 0.001 [default-nginx-80] [] 10.244.0.31:80 615 0.001 200 73850050de79f5e412cbaba4a78632d5
	
	
	==> coredns [4bb2f7fc8d15] <==
	[INFO] 10.244.0.7:52054 - 34547 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096909s
	[INFO] 10.244.0.7:56652 - 40345 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002470414s
	[INFO] 10.244.0.7:56652 - 43620 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002364513s
	[INFO] 10.244.0.7:59068 - 23895 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138878s
	[INFO] 10.244.0.7:59068 - 33369 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000317509s
	[INFO] 10.244.0.7:45545 - 54013 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000149889s
	[INFO] 10.244.0.7:45545 - 16122 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000280143s
	[INFO] 10.244.0.7:58762 - 63616 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000090214s
	[INFO] 10.244.0.7:58762 - 42116 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00016597s
	[INFO] 10.244.0.7:35784 - 27011 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057459s
	[INFO] 10.244.0.7:35784 - 4541 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000114927s
	[INFO] 10.244.0.7:46635 - 6507 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001490279s
	[INFO] 10.244.0.7:46635 - 54632 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001398097s
	[INFO] 10.244.0.7:43296 - 50435 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000086834s
	[INFO] 10.244.0.7:43296 - 17920 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000042469s
	[INFO] 10.244.0.25:47286 - 9272 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000365266s
	[INFO] 10.244.0.25:33969 - 27542 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174465s
	[INFO] 10.244.0.25:57028 - 36082 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000167802s
	[INFO] 10.244.0.25:40374 - 50158 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000899867s
	[INFO] 10.244.0.25:38367 - 41125 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000195888s
	[INFO] 10.244.0.25:56918 - 14670 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156027s
	[INFO] 10.244.0.25:36978 - 15491 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003010469s
	[INFO] 10.244.0.25:33782 - 1131 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002287732s
	[INFO] 10.244.0.25:58233 - 63159 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001857015s
	[INFO] 10.244.0.25:60355 - 59516 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001095271s
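	The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search expansion: with the settings cri-dockerd wrote into the pods (search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal, ndots:5), a name with fewer than five dots is tried against every search suffix before being queried as an absolute name. The config driving it can be read from any pod (a sketch; the probe pod is throwaway):
	
	# Prints the nameserver, the four-entry search list, and ndots:5 that
	# produce the suffixed NXDOMAIN probes seen in the log above.
	kubectl --context addons-923322 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- cat /etc/resolv.conf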
	
	
	==> describe nodes <==
	Name:               addons-923322
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-923322
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=addons-923322
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T19_39_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-923322
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 19:38:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-923322
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 19:51:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 19:47:40 +0000   Wed, 18 Sep 2024 19:38:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-923322
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 7230010990fe4ea0a589fc7937513d5d
	  System UUID:                fdce7829-22ca-4a4d-8fcd-8b54819b5e49
	  Boot ID:                    89948b1e-c5b8-41d2-bbb3-b80b856868d6
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-769b77f747-pkc8f     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     hello-world-app-55bf9c44b4-lzqv9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  gcp-auth                    gcp-auth-89d5ffd79-x4mf2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-85r62    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-2g4l7                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-923322                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-923322                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-923322       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-c2h5g                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-923322                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-cddcv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-94sjr     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-4wvqd              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-923322 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-923322 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-923322 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-923322 event: Registered Node addons-923322 in Controller
	
	
	==> dmesg <==
	[Sep18 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015410] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.490719] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.720496] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.132493] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [3fae247a1869] <==
	{"level":"info","ts":"2024-09-18T19:38:54.127567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-18T19:38:54.127730Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-18T19:38:54.895295Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-18T19:38:54.895511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-18T19:38:54.895638Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-18T19:38:54.895763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:54.895858Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:54.895963Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:54.896074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-18T19:38:54.899385Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:54.907468Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-923322 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-18T19:38:54.907809Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:54.907994Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:54.908308Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:54.908449Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-18T19:38:54.908034Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-18T19:38:54.908057Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:54.909080Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-18T19:38:54.909798Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:54.910806Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-18T19:38:54.931988Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-18T19:38:54.933258Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-18T19:48:55.274959Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1856}
	{"level":"info","ts":"2024-09-18T19:48:55.326157Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1856,"took":"50.363715ms","hash":2043616100,"current-db-size-bytes":8851456,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4837376,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-18T19:48:55.326214Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2043616100,"revision":1856,"compact-revision":-1}
	
	
	==> gcp-auth [e19eb0bfe303] <==
	2024/09/18 19:41:50 GCP Auth Webhook started!
	2024/09/18 19:42:09 Ready to marshal response ...
	2024/09/18 19:42:09 Ready to write response ...
	2024/09/18 19:42:09 Ready to marshal response ...
	2024/09/18 19:42:09 Ready to write response ...
	2024/09/18 19:42:32 Ready to marshal response ...
	2024/09/18 19:42:32 Ready to write response ...
	2024/09/18 19:42:33 Ready to marshal response ...
	2024/09/18 19:42:33 Ready to write response ...
	2024/09/18 19:42:33 Ready to marshal response ...
	2024/09/18 19:42:33 Ready to write response ...
	2024/09/18 19:50:46 Ready to marshal response ...
	2024/09/18 19:50:46 Ready to write response ...
	2024/09/18 19:50:47 Ready to marshal response ...
	2024/09/18 19:50:47 Ready to write response ...
	2024/09/18 19:51:02 Ready to marshal response ...
	2024/09/18 19:51:02 Ready to write response ...
	2024/09/18 19:51:37 Ready to marshal response ...
	2024/09/18 19:51:37 Ready to write response ...
	2024/09/18 19:51:48 Ready to marshal response ...
	2024/09/18 19:51:48 Ready to write response ...
	
	
	==> kernel <==
	 19:51:51 up 34 min,  0 users,  load average: 1.67, 0.86, 0.71
	Linux addons-923322 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4e05a51d5d38] <==
	W0918 19:42:24.752742       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0918 19:42:24.829997       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0918 19:42:24.840343       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0918 19:42:25.315830       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0918 19:42:25.461594       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0918 19:50:54.985560       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0918 19:51:17.916955       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:17.916997       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:17.938163       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:17.938204       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:17.952061       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:17.952364       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:17.991611       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:17.991662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 19:51:18.034771       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 19:51:18.037248       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0918 19:51:18.938573       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 19:51:19.038702       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0918 19:51:19.102883       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0918 19:51:31.654386       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 19:51:32.785440       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0918 19:51:37.257903       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0918 19:51:37.591305       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.3.16"}
	I0918 19:51:46.966522       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0918 19:51:48.593459       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.31.192"}
	
	
	==> kube-controller-manager [a9fec9e8cc3f] <==
	W0918 19:51:36.342731       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:36.342782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:36.619536       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:36.619600       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:37.576997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:37.577057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:39.309641       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:39.309681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:39.619618       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:39.619666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:39.652312       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:39.652353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:40.658663       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:40.658838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0918 19:51:41.606709       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:41.606760       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:51:41.827439       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0918 19:51:48.276450       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="50.104966ms"
	I0918 19:51:48.339857       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="63.356091ms"
	I0918 19:51:48.339933       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="39.91µs"
	I0918 19:51:48.994412       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.128µs"
	W0918 19:51:49.868713       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:51:49.868763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0918 19:51:50.729033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.006764ms"
	I0918 19:51:50.729142       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="67.336µs"
	
	
	==> kube-proxy [208ba88a814b] <==
	I0918 19:39:06.453455       1 server_linux.go:66] "Using iptables proxy"
	I0918 19:39:06.606842       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0918 19:39:06.606904       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0918 19:39:06.665750       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 19:39:06.665810       1 server_linux.go:169] "Using iptables Proxier"
	I0918 19:39:06.669967       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0918 19:39:06.670272       1 server.go:483] "Version info" version="v1.31.1"
	I0918 19:39:06.670285       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:39:06.696607       1 config.go:199] "Starting service config controller"
	I0918 19:39:06.696645       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0918 19:39:06.696672       1 config.go:105] "Starting endpoint slice config controller"
	I0918 19:39:06.696676       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0918 19:39:06.699658       1 config.go:328] "Starting node config controller"
	I0918 19:39:06.699675       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0918 19:39:06.796782       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0918 19:39:06.796850       1 shared_informer.go:320] Caches are synced for service config
	I0918 19:39:06.800826       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [eb06e11940d5] <==
	W0918 19:38:57.900689       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0918 19:38:57.900828       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:38:57.900853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.900899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:38:57.900913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.900954       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:38:57.900970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.901013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:57.901024       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.901074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:38:57.901090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.901135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:38:57.901151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.901217       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:38:57.901231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.901288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:38:57.901302       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.901361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:57.901375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0918 19:38:57.901446       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.900743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:38:57.901572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0918 19:38:57.900788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:38:57.901663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0918 19:38:59.088360       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.018014    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/633c3e04-6499-4a0c-8b85-df14b292d711-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "633c3e04-6499-4a0c-8b85-df14b292d711" (UID: "633c3e04-6499-4a0c-8b85-df14b292d711"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.022716    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633c3e04-6499-4a0c-8b85-df14b292d711-kube-api-access-wzh47" (OuterVolumeSpecName: "kube-api-access-wzh47") pod "633c3e04-6499-4a0c-8b85-df14b292d711" (UID: "633c3e04-6499-4a0c-8b85-df14b292d711"). InnerVolumeSpecName "kube-api-access-wzh47". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.120308    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wzh47\" (UniqueName: \"kubernetes.io/projected/633c3e04-6499-4a0c-8b85-df14b292d711-kube-api-access-wzh47\") on node \"addons-923322\" DevicePath \"\""
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.120347    2362 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/633c3e04-6499-4a0c-8b85-df14b292d711-gcp-creds\") on node \"addons-923322\" DevicePath \"\""
	Sep 18 19:51:48 addons-923322 kubelet[2362]: E0918 19:51:48.274805    2362 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cffcd439-fa27-4718-a834-9509d4c523dd" containerName="gadget"
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.278008    2362 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffcd439-fa27-4718-a834-9509d4c523dd" containerName="gadget"
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.425389    2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5ad44129-ae8e-4938-8cb2-8ed92072b2e5-gcp-creds\") pod \"hello-world-app-55bf9c44b4-lzqv9\" (UID: \"5ad44129-ae8e-4938-8cb2-8ed92072b2e5\") " pod="default/hello-world-app-55bf9c44b4-lzqv9"
	Sep 18 19:51:48 addons-923322 kubelet[2362]: I0918 19:51:48.425491    2362 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chx49\" (UniqueName: \"kubernetes.io/projected/5ad44129-ae8e-4938-8cb2-8ed92072b2e5-kube-api-access-chx49\") pod \"hello-world-app-55bf9c44b4-lzqv9\" (UID: \"5ad44129-ae8e-4938-8cb2-8ed92072b2e5\") " pod="default/hello-world-app-55bf9c44b4-lzqv9"
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.548781    2362 scope.go:117] "RemoveContainer" containerID="2f5e92316cafdab025dfa1c5f164e8e01cca4bd2a706c10581755d47ad92b385"
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.683546    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzcfz\" (UniqueName: \"kubernetes.io/projected/be6aeece-e555-4628-88de-f374e1e78aa3-kube-api-access-tzcfz\") pod \"be6aeece-e555-4628-88de-f374e1e78aa3\" (UID: \"be6aeece-e555-4628-88de-f374e1e78aa3\") "
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.704428    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be6aeece-e555-4628-88de-f374e1e78aa3-kube-api-access-tzcfz" (OuterVolumeSpecName: "kube-api-access-tzcfz") pod "be6aeece-e555-4628-88de-f374e1e78aa3" (UID: "be6aeece-e555-4628-88de-f374e1e78aa3"). InnerVolumeSpecName "kube-api-access-tzcfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.784636    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tzcfz\" (UniqueName: \"kubernetes.io/projected/be6aeece-e555-4628-88de-f374e1e78aa3-kube-api-access-tzcfz\") on node \"addons-923322\" DevicePath \"\""
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.887076    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxdzq\" (UniqueName: \"kubernetes.io/projected/e2a2228e-559d-447a-953c-77300e373ad5-kube-api-access-mxdzq\") pod \"e2a2228e-559d-447a-953c-77300e373ad5\" (UID: \"e2a2228e-559d-447a-953c-77300e373ad5\") "
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.896488    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a2228e-559d-447a-953c-77300e373ad5-kube-api-access-mxdzq" (OuterVolumeSpecName: "kube-api-access-mxdzq") pod "e2a2228e-559d-447a-953c-77300e373ad5" (UID: "e2a2228e-559d-447a-953c-77300e373ad5"). InnerVolumeSpecName "kube-api-access-mxdzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:49 addons-923322 kubelet[2362]: I0918 19:51:49.988034    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mxdzq\" (UniqueName: \"kubernetes.io/projected/e2a2228e-559d-447a-953c-77300e373ad5-kube-api-access-mxdzq\") on node \"addons-923322\" DevicePath \"\""
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.106917    2362 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="633c3e04-6499-4a0c-8b85-df14b292d711" path="/var/lib/kubelet/pods/633c3e04-6499-4a0c-8b85-df14b292d711/volumes"
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.394803    2362 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qhd\" (UniqueName: \"kubernetes.io/projected/22538dc0-3ac3-4849-83e9-9fc02c69f1d9-kube-api-access-44qhd\") pod \"22538dc0-3ac3-4849-83e9-9fc02c69f1d9\" (UID: \"22538dc0-3ac3-4849-83e9-9fc02c69f1d9\") "
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.396991    2362 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22538dc0-3ac3-4849-83e9-9fc02c69f1d9-kube-api-access-44qhd" (OuterVolumeSpecName: "kube-api-access-44qhd") pod "22538dc0-3ac3-4849-83e9-9fc02c69f1d9" (UID: "22538dc0-3ac3-4849-83e9-9fc02c69f1d9"). InnerVolumeSpecName "kube-api-access-44qhd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.496063    2362 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-44qhd\" (UniqueName: \"kubernetes.io/projected/22538dc0-3ac3-4849-83e9-9fc02c69f1d9-kube-api-access-44qhd\") on node \"addons-923322\" DevicePath \"\""
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.618756    2362 scope.go:117] "RemoveContainer" containerID="371b94d41c801840c3dd27d8e6226905087b8f9c9b99cbaff78ce754c5db6c64"
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.666300    2362 scope.go:117] "RemoveContainer" containerID="16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.700932    2362 scope.go:117] "RemoveContainer" containerID="16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
	Sep 18 19:51:50 addons-923322 kubelet[2362]: E0918 19:51:50.702518    2362 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc" containerID="16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.702552    2362 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"} err="failed to get container status \"16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc\": rpc error: code = Unknown desc = Error response from daemon: No such container: 16da3983a19504c0f855d07ca822776702efc4d354c269e0d4e48c9079a608bc"
	Sep 18 19:51:50 addons-923322 kubelet[2362]: I0918 19:51:50.731462    2362 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-lzqv9" podStartSLOduration=1.777165646 podStartE2EDuration="2.731441375s" podCreationTimestamp="2024-09-18 19:51:48 +0000 UTC" firstStartedPulling="2024-09-18 19:51:49.206515985 +0000 UTC m=+769.310666271" lastFinishedPulling="2024-09-18 19:51:50.160791656 +0000 UTC m=+770.264942000" observedRunningTime="2024-09-18 19:51:50.691851755 +0000 UTC m=+770.796002042" watchObservedRunningTime="2024-09-18 19:51:50.731441375 +0000 UTC m=+770.835591662"
	
	
	==> storage-provisioner [fa048570e948] <==
	I0918 19:39:12.937170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:39:12.956577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:39:12.956625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:39:12.971139       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:39:12.971272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35ff7f37-9809-4c37-8770-4de917523087", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-923322_6b196dd7-83ac-448c-bc47-d2c005a5acbb became leader
	I0918 19:39:12.971421       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-923322_6b196dd7-83ac-448c-bc47-d2c005a5acbb!
	I0918 19:39:13.071616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-923322_6b196dd7-83ac-448c-bc47-d2c005a5acbb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-923322 -n addons-923322
helpers_test.go:261: (dbg) Run:  kubectl --context addons-923322 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-923322 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-923322 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-923322/192.168.49.2
	Start Time:       Wed, 18 Sep 2024 19:42:33 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vrxnf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-vrxnf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m19s                   default-scheduler  Successfully assigned default/busybox to addons-923322
	  Warning  Failed     7m55s (x6 over 9m18s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m44s (x4 over 9m19s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m19s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m19s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m17s (x21 over 9m18s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (75.95s)
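The pull failure is visible in the kubelet events above: the daemon's manifest request to https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc was rejected with "unauthorized: authentication failed", so busybox never left ImagePullBackOff. As a quick cross-check outside the cluster, a minimal Go sketch (not part of the test harness; it deliberately skips the token exchange a registry client normally performs) that issues the same manifest HEAD request:

	// probe the manifest URL from the kubelet event above; an anonymous HEAD
	// normally returns 401 plus a WWW-Authenticate challenge that the docker
	// daemon follows to obtain a pull token
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 10 * time.Second}
		req, err := http.NewRequest(http.MethodHead,
			"https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc", nil)
		if err != nil {
			panic(err)
		}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("request failed:", err) // network-level failure, not auth
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
		fmt.Println("challenge:", resp.Header.Get("WWW-Authenticate"))
	}

A plain 401 here is expected for anonymous requests; the "authentication failed" event suggests the daemon's follow-up token request itself was refused, which points at the registry side rather than the cluster.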

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (12.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdspecific-port552550871/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (558.063515ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 19:57:24.659054    7565 retry.go:31] will retry after 726.180952ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.254628ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 19:57:25.725926    7565 retry.go:31] will retry after 1.117087061s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (342.552577ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 19:57:27.186253    7565 retry.go:31] will retry after 737.130802ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.700728ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 19:57:28.277263    7565 retry.go:31] will retry after 1.863039018s: exit status 1
2024/09/18 19:57:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.503192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 19:57:30.527382    7565 retry.go:31] will retry after 2.496720191s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (406.156962ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0918 19:57:33.431328    7565 retry.go:31] will retry after 2.433468817s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (344.181842ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 12.108687886s: exit status 1
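Each "will retry after …" line above comes from the harness's backoff helper (retry.go:31), which re-runs the findmnt probe with growing, jittered delays until the deadline passes. A minimal sketch of that poll-with-backoff pattern follows; the growth rule and jitter are illustrative assumptions, not minikube's actual tuning:

	// poll-with-backoff sketch behind the "will retry after ..." lines;
	// delays and jitter here are illustrative, not minikube's real values
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(deadline time.Duration, probe func() error) error {
		start := time.Now()
		base := 500 * time.Millisecond
		for attempt := 1; ; attempt++ {
			err := probe()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("gave up after %v: %w", time.Since(start), err)
			}
			// grow the delay and add jitter so parallel tests don't retry in lockstep
			wait := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
	}

	func main() {
		// stand-in for the findmnt check; always fails so the backoff is visible
		err := retryUntil(3*time.Second, func() error { return fmt.Errorf("exit status 1") })
		fmt.Println(err)
	}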
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (342.311632ms)

                                                
                                                
-- stdout --
	total 8
	drwxr-xr-x 2 root root 4096 Sep 18 19:57 .
	drwxr-xr-x 1 root root 4096 Sep 18 19:57 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-arm64 -p functional-325340 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "sudo umount -f /mount-9p": exit status 1 (336.240844ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-325340 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdspecific-port552550871/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdspecific-port552550871/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdspecific-port552550871/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I0918 19:57:24.209758   50107 out.go:345] Setting OutFile to fd 1 ...
I0918 19:57:24.209993   50107 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:24.210005   50107 out.go:358] Setting ErrFile to fd 2...
I0918 19:57:24.210012   50107 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:24.210761   50107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:57:24.211619   50107 mustload.go:65] Loading cluster: functional-325340
I0918 19:57:24.212567   50107 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:24.214401   50107 cli_runner.go:164] Run: docker container inspect functional-325340 --format={{.State.Status}}
I0918 19:57:24.264260   50107 host.go:66] Checking if "functional-325340" exists ...
I0918 19:57:24.264981   50107 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0918 19:57:24.462749   50107 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 19:57:24.444949642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
I0918 19:57:24.463020   50107 cli_runner.go:164] Run: docker network inspect functional-325340 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0918 19:57:24.517768   50107 out.go:201] 
W0918 19:57:24.520681   50107 out.go:270] X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
X Exiting due to IF_MOUNT_PORT: Error finding port for mount: Error accessing port 46464
W0918 19:57:24.520709   50107 out.go:270] * 
* 
W0918 19:57:24.527387   50107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_30dcac30c7f56bcf6a9c2f52e657153365bf43f9_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_30dcac30c7f56bcf6a9c2f52e657153365bf43f9_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0918 19:57:24.533129   50107 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (12.90s)
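The mount stderr above shows the actual failure: IF_MOUNT_PORT, i.e. the mount command could not claim the fixed --port 46464 on the host, so no 9p server ever started and /mount-9p never appeared. A pre-flight bindability check for a fixed mount port looks roughly like the sketch below (the exact probe minikube performs may differ):

	// check whether a fixed port (here the --port 46464 passed to
	// `minikube mount`) can be bound; minikube's own probe may differ
	package main

	import (
		"fmt"
		"net"
	)

	func portFree(port int) error {
		ln, err := net.Listen("tcp", fmt.Sprintf(":%d", port))
		if err != nil {
			return fmt.Errorf("port %d not bindable: %w", port, err)
		}
		return ln.Close()
	}

	func main() {
		if err := portFree(46464); err != nil {
			fmt.Println(err) // matches the IF_MOUNT_PORT failure mode
			return
		}
		fmt.Println("port 46464 is free")
	}

If the port is held by a leftover process from an earlier run, picking an ephemeral port (or omitting --port) avoids the collision.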

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (374.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-959748 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0918 20:44:14.861045    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-959748 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m9.272951767s)

                                                
                                                
-- stdout --
	* [old-k8s-version-959748] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-959748" primary control-plane node in "old-k8s-version-959748" cluster
	* Pulling base image v0.0.45-1726589491-19662 ...
	* Restarting existing docker container for "old-k8s-version-959748" ...
	* Preparing Kubernetes v1.20.0 on Docker 27.2.1 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-959748 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 20:43:50.631315  321053 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:43:50.631534  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:43:50.631562  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:43:50.631581  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:43:50.631892  321053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 20:43:50.632373  321053 out.go:352] Setting JSON to false
	I0918 20:43:50.633516  321053 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5178,"bootTime":1726687053,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 20:43:50.633626  321053 start.go:139] virtualization:  
	I0918 20:43:50.638435  321053 out.go:177] * [old-k8s-version-959748] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 20:43:50.641392  321053 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:43:50.641468  321053 notify.go:220] Checking for updates...
	I0918 20:43:50.644978  321053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:43:50.647998  321053 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 20:43:50.650706  321053 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 20:43:50.653381  321053 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 20:43:50.656433  321053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:43:50.660428  321053 config.go:182] Loaded profile config "old-k8s-version-959748": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 20:43:50.667484  321053 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0918 20:43:50.670420  321053 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:43:50.717414  321053 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:43:50.717551  321053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:43:50.809314  321053 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 20:43:50.79848704 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:43:50.809428  321053 docker.go:318] overlay module found
	I0918 20:43:50.812627  321053 out.go:177] * Using the docker driver based on existing profile
	I0918 20:43:50.815654  321053 start.go:297] selected driver: docker
	I0918 20:43:50.815676  321053 start.go:901] validating driver "docker" against &{Name:old-k8s-version-959748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-959748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:43:50.815799  321053 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:43:50.816389  321053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:43:50.935071  321053 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 20:43:50.92211318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:43:50.935528  321053 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:43:50.935559  321053 cni.go:84] Creating CNI manager for ""
	I0918 20:43:50.935601  321053 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 20:43:50.935645  321053 start.go:340] cluster config:
	{Name:old-k8s-version-959748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-959748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:43:50.938967  321053 out.go:177] * Starting "old-k8s-version-959748" primary control-plane node in "old-k8s-version-959748" cluster
	I0918 20:43:50.941705  321053 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 20:43:50.944633  321053 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 20:43:50.947365  321053 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 20:43:50.947427  321053 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 20:43:50.947460  321053 cache.go:56] Caching tarball of preloaded images
	I0918 20:43:50.947564  321053 preload.go:172] Found /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 20:43:50.947579  321053 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 20:43:50.947708  321053 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/config.json ...
	I0918 20:43:50.947954  321053 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	W0918 20:43:50.978186  321053 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0918 20:43:50.978215  321053 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 20:43:50.978300  321053 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 20:43:50.978324  321053 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 20:43:50.978329  321053 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 20:43:50.978338  321053 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 20:43:50.978350  321053 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 20:43:51.136886  321053 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 20:43:51.136927  321053 cache.go:194] Successfully downloaded all kic artifacts
	I0918 20:43:51.136967  321053 start.go:360] acquireMachinesLock for old-k8s-version-959748: {Name:mk82f8ebff333325448fcd4e48f49b320d13268a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:43:51.137041  321053 start.go:364] duration metric: took 45.775µs to acquireMachinesLock for "old-k8s-version-959748"
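
The machines lock above is acquired by polling with a fixed delay until a timeout (Delay:500ms Timeout:10m0s); here the first attempt succeeds in 45.775µs because nothing else holds the lock. A minimal Go sketch of that acquire-with-timeout pattern, assuming a simple O_EXCL lock file; acquire and the lock path are hypothetical, not the minikube lock implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until the timeout elapses,
// mirroring the Delay/Timeout fields shown in the log line above.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil // caller must release
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
		}
		time.Sleep(delay) // poll interval (Delay:500ms in the log)
	}
}

func main() {
	release, err := acquire("/tmp/old-k8s-version-959748.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held")
}
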
	I0918 20:43:51.137074  321053 start.go:96] Skipping create...Using existing machine configuration
	I0918 20:43:51.137086  321053 fix.go:54] fixHost starting: 
	I0918 20:43:51.137421  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:51.168802  321053 fix.go:112] recreateIfNeeded on old-k8s-version-959748: state=Stopped err=<nil>
	W0918 20:43:51.168836  321053 fix.go:138] unexpected machine state, will restart: <nil>
	I0918 20:43:51.173608  321053 out.go:177] * Restarting existing docker container for "old-k8s-version-959748" ...
	I0918 20:43:51.176362  321053 cli_runner.go:164] Run: docker start old-k8s-version-959748
	I0918 20:43:51.609116  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:51.644816  321053 kic.go:430] container "old-k8s-version-959748" state is running.
	I0918 20:43:51.645311  321053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-959748
	I0918 20:43:51.681933  321053 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/config.json ...
	I0918 20:43:51.682180  321053 machine.go:93] provisionDockerMachine start ...
	I0918 20:43:51.682246  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:51.711123  321053 main.go:141] libmachine: Using SSH client type: native
	I0918 20:43:51.711410  321053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I0918 20:43:51.711427  321053 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:43:51.712114  321053 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54168->127.0.0.1:33081: read: connection reset by peer
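
The dial error above is expected right after docker start: the forwarded SSH port (127.0.0.1:33081) resets connections until sshd inside the restarted container is up, so the client keeps retrying and the command succeeds a few seconds later on the next line. A minimal Go sketch of such a wait-for-port loop, using the address from the log; waitForSSH is a hypothetical helper, not minikube source:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries a TCP dial until the forwarded SSH port accepts
// connections or the overall timeout elapses.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		// "connection reset by peer" is expected while sshd starts up
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33081", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
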
	I0918 20:43:54.858972  321053 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-959748
	
	I0918 20:43:54.859001  321053 ubuntu.go:169] provisioning hostname "old-k8s-version-959748"
	I0918 20:43:54.859095  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:54.878058  321053 main.go:141] libmachine: Using SSH client type: native
	I0918 20:43:54.878307  321053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I0918 20:43:54.878319  321053 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-959748 && echo "old-k8s-version-959748" | sudo tee /etc/hostname
	I0918 20:43:55.062774  321053 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-959748
	
	I0918 20:43:55.062867  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:55.091401  321053 main.go:141] libmachine: Using SSH client type: native
	I0918 20:43:55.091661  321053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I0918 20:43:55.091679  321053 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-959748' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-959748/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-959748' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:43:55.247850  321053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
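
The SSH script above keeps /etc/hosts consistent with the freshly set hostname: if a 127.0.1.1 entry exists it is rewritten, otherwise one is appended. The same reconciliation expressed as a Go sketch; ensureHostsEntry is hypothetical and a simplification of the grep/sed fragment:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an existing 127.0.1.1 line to point at
// hostname, or appends one if none is present.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite existing alias
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // append if absent
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-959748"); err != nil {
		fmt.Println(err)
	}
}
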
	I0918 20:43:55.247879  321053 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-2236/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-2236/.minikube}
	I0918 20:43:55.247911  321053 ubuntu.go:177] setting up certificates
	I0918 20:43:55.247922  321053 provision.go:84] configureAuth start
	I0918 20:43:55.247997  321053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-959748
	I0918 20:43:55.270473  321053 provision.go:143] copyHostCerts
	I0918 20:43:55.270556  321053 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem, removing ...
	I0918 20:43:55.270570  321053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem
	I0918 20:43:55.270655  321053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem (1078 bytes)
	I0918 20:43:55.270767  321053 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem, removing ...
	I0918 20:43:55.270780  321053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem
	I0918 20:43:55.270812  321053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem (1123 bytes)
	I0918 20:43:55.270877  321053 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem, removing ...
	I0918 20:43:55.270886  321053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem
	I0918 20:43:55.270913  321053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem (1675 bytes)
	I0918 20:43:55.270966  321053 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-959748 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-959748]
	I0918 20:43:55.448020  321053 provision.go:177] copyRemoteCerts
	I0918 20:43:55.448090  321053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:43:55.448141  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:55.466266  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:55.568925  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0918 20:43:55.600269  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0918 20:43:55.628520  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0918 20:43:55.654939  321053 provision.go:87] duration metric: took 406.998827ms to configureAuth
	I0918 20:43:55.655020  321053 ubuntu.go:193] setting minikube options for container-runtime
	I0918 20:43:55.655284  321053 config.go:182] Loaded profile config "old-k8s-version-959748": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 20:43:55.655369  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:55.673641  321053 main.go:141] libmachine: Using SSH client type: native
	I0918 20:43:55.674019  321053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I0918 20:43:55.674045  321053 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 20:43:55.832861  321053 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0918 20:43:55.832930  321053 ubuntu.go:71] root file system type: overlay
	I0918 20:43:55.833084  321053 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 20:43:55.833216  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:55.852882  321053 main.go:141] libmachine: Using SSH client type: native
	I0918 20:43:55.853155  321053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I0918 20:43:55.853238  321053 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 20:43:56.013474  321053 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 20:43:56.013642  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:56.040028  321053 main.go:141] libmachine: Using SSH client type: native
	I0918 20:43:56.040283  321053 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33081 <nil> <nil>}
	I0918 20:43:56.040308  321053 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 20:43:56.217395  321053 main.go:141] libmachine: SSH cmd err, output: <nil>: 
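
The one-liner above is an idempotent update: the desired unit is written to docker.service.new, and only when diff reports a difference is it moved into place and docker reloaded, enabled, and restarted, so an unchanged unit costs no restart. A Go sketch of the same compare-then-swap pattern; updateUnit is hypothetical, not minikube source:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes the unit only when its content differs from what is
// on disk, then reloads systemd and restarts the service -- the same
// effect as the diff/mv one-liner in the log.
func updateUnit(path string, desired []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return false, nil // unchanged: skip the service restart entirely
	}
	if err := os.WriteFile(path, desired, 0o644); err != nil {
		return false, err
	}
	if out, err := exec.Command("systemctl", "daemon-reload").CombinedOutput(); err != nil {
		return true, fmt.Errorf("daemon-reload: %v: %s", err, out)
	}
	return true, exec.Command("systemctl", "restart", "docker").Run()
}

func main() {
	// demo path; on a real host this would be /lib/systemd/system/docker.service
	changed, err := updateUnit("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
	fmt.Println(changed, err)
}
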
	I0918 20:43:56.217473  321053 machine.go:96] duration metric: took 4.535275005s to provisionDockerMachine
	I0918 20:43:56.217499  321053 start.go:293] postStartSetup for "old-k8s-version-959748" (driver="docker")
	I0918 20:43:56.217539  321053 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:43:56.217650  321053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:43:56.217734  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:56.236760  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:56.344948  321053 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:43:56.348566  321053 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 20:43:56.348659  321053 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 20:43:56.348683  321053 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 20:43:56.348692  321053 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 20:43:56.348715  321053 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/addons for local assets ...
	I0918 20:43:56.348776  321053 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/files for local assets ...
	I0918 20:43:56.348907  321053 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem -> 75652.pem in /etc/ssl/certs
	I0918 20:43:56.349021  321053 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:43:56.358886  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem --> /etc/ssl/certs/75652.pem (1708 bytes)
	I0918 20:43:56.386551  321053 start.go:296] duration metric: took 169.021973ms for postStartSetup
	I0918 20:43:56.386642  321053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:43:56.386693  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:56.405426  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:56.504674  321053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 20:43:56.511206  321053 fix.go:56] duration metric: took 5.374111747s for fixHost
	I0918 20:43:56.511233  321053 start.go:83] releasing machines lock for "old-k8s-version-959748", held for 5.374172185s
	I0918 20:43:56.511368  321053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-959748
	I0918 20:43:56.528834  321053 ssh_runner.go:195] Run: cat /version.json
	I0918 20:43:56.528905  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:56.529079  321053 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:43:56.529173  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:56.550392  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:56.568393  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:56.646926  321053 ssh_runner.go:195] Run: systemctl --version
	I0918 20:43:56.781003  321053 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 20:43:56.785686  321053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 20:43:56.806331  321053 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 20:43:56.806483  321053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0918 20:43:56.827841  321053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0918 20:43:56.846457  321053 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
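
The find/sed pipeline above rewrites any bridge CNI config so its subnet becomes the pod CIDR 10.244.0.0/16 and IPv6 dst/subnet entries are dropped. A sketch of the subnet rewrite done through JSON rather than sed; the input structure is a typical host-local bridge config, assumed for the example:

package main

import (
	"encoding/json"
	"fmt"
)

// rewriteSubnet forces the pod CIDR on a bridge CNI config, the same
// outcome the sed expressions in the log produce.
func rewriteSubnet(conf []byte, subnet string) ([]byte, error) {
	var m map[string]any
	if err := json.Unmarshal(conf, &m); err != nil {
		return nil, err
	}
	if ipam, ok := m["ipam"].(map[string]any); ok {
		ipam["subnet"] = subnet // pin the subnet minikube expects
	}
	return json.MarshalIndent(m, "", "  ")
}

func main() {
	in := []byte(`{"type":"bridge","ipam":{"type":"host-local","subnet":"192.168.0.0/24"}}`)
	out, err := rewriteSubnet(in, "10.244.0.0/16")
	fmt.Println(string(out), err)
}
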
	I0918 20:43:56.846541  321053 start.go:495] detecting cgroup driver to use...
	I0918 20:43:56.846592  321053 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 20:43:56.846713  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:43:56.865185  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0918 20:43:56.875805  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 20:43:56.886359  321053 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 20:43:56.886485  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 20:43:56.897476  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 20:43:56.908620  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 20:43:56.923753  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 20:43:56.935815  321053 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:43:56.945413  321053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 20:43:56.956135  321053 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:43:56.965435  321053 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 20:43:56.974482  321053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:43:57.072153  321053 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 20:43:57.197403  321053 start.go:495] detecting cgroup driver to use...
	I0918 20:43:57.197490  321053 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 20:43:57.197577  321053 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 20:43:57.218737  321053 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0918 20:43:57.218864  321053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 20:43:57.231804  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:43:57.249090  321053 ssh_runner.go:195] Run: which cri-dockerd
	I0918 20:43:57.253275  321053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 20:43:57.262949  321053 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0918 20:43:57.283799  321053 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 20:43:57.394651  321053 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 20:43:57.503579  321053 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 20:43:57.503756  321053 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 20:43:57.529661  321053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:43:57.645758  321053 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 20:43:58.129263  321053 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 20:43:58.157916  321053 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
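
The 130-byte /etc/docker/daemon.json pushed above pins docker to the detected "cgroupfs" cgroup driver before the daemon is restarted. The log does not show the file's contents; this sketch composes a plausible payload, and every field beyond the cgroup driver is an assumption for illustration:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Only the cgroup driver is known from the log; the other fields are
	// typical docker daemon settings, assumed here.
	conf := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"storage-driver": "overlay2",
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out)) // written to /etc/docker/daemon.json over SSH
}
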
	I0918 20:43:58.185830  321053 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 27.2.1 ...
	I0918 20:43:58.185940  321053 cli_runner.go:164] Run: docker network inspect old-k8s-version-959748 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 20:43:58.204426  321053 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0918 20:43:58.208755  321053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:43:58.221355  321053 kubeadm.go:883] updating cluster {Name:old-k8s-version-959748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-959748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:43:58.221502  321053 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 20:43:58.221561  321053 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 20:43:58.242647  321053 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	registry.k8s.io/pause:3.2
	k8s.gcr.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0918 20:43:58.242676  321053 docker.go:615] Images already preloaded, skipping extraction
	I0918 20:43:58.242748  321053 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 20:43:58.265686  321053 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0918 20:43:58.265717  321053 cache_images.go:84] Images are preloaded, skipping loading
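
The skip decision above compares the images reported by docker images against the set the preload tarball should provide; extraction only happens when something required is missing. A minimal Go sketch of that set-containment check; allPresent and the trimmed image lists are hypothetical:

package main

import "fmt"

// allPresent reports whether every required image already exists in the
// daemon's image list, mirroring "Images are preloaded, skipping loading".
func allPresent(have, want []string) bool {
	set := make(map[string]bool, len(have))
	for _, img := range have {
		set[img] = true
	}
	for _, img := range want {
		if !set[img] {
			return false // at least one required image missing
		}
	}
	return true
}

func main() {
	have := []string{"registry.k8s.io/pause:3.2", "registry.k8s.io/etcd:3.4.13-0"}
	want := []string{"registry.k8s.io/pause:3.2"}
	fmt.Println(allPresent(have, want)) // true: extraction can be skipped
}
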
	I0918 20:43:58.265728  321053 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
	I0918 20:43:58.265842  321053 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-959748 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-959748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0918 20:43:58.265920  321053 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 20:43:58.320220  321053 cni.go:84] Creating CNI manager for ""
	I0918 20:43:58.320266  321053 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 20:43:58.320276  321053 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:43:58.320328  321053 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-959748 NodeName:old-k8s-version-959748 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 20:43:58.320537  321053 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-959748"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 20:43:58.320614  321053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0918 20:43:58.330199  321053 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:43:58.330273  321053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:43:58.339639  321053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0918 20:43:58.359984  321053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:43:58.381761  321053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0918 20:43:58.401069  321053 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0918 20:43:58.405375  321053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:43:58.418502  321053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:43:58.513309  321053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:43:58.532939  321053 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748 for IP: 192.168.76.2
	I0918 20:43:58.532961  321053 certs.go:194] generating shared ca certs ...
	I0918 20:43:58.532978  321053 certs.go:226] acquiring lock for ca certs: {Name:mk958e02b356056556309ee300f2f34fdfb18284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:43:58.533121  321053 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key
	I0918 20:43:58.533174  321053 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key
	I0918 20:43:58.533186  321053 certs.go:256] generating profile certs ...
	I0918 20:43:58.533289  321053 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.key
	I0918 20:43:58.533363  321053 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/apiserver.key.9e18f7be
	I0918 20:43:58.533407  321053 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/proxy-client.key
	I0918 20:43:58.533520  321053 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/7565.pem (1338 bytes)
	W0918 20:43:58.533555  321053 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-2236/.minikube/certs/7565_empty.pem, impossibly tiny 0 bytes
	I0918 20:43:58.533567  321053 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 20:43:58.533607  321053 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem (1078 bytes)
	I0918 20:43:58.533633  321053 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:43:58.533662  321053 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem (1675 bytes)
	I0918 20:43:58.533715  321053 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem (1708 bytes)
	I0918 20:43:58.534320  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:43:58.565366  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 20:43:58.596425  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:43:58.627603  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 20:43:58.668321  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0918 20:43:58.708680  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 20:43:58.752039  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:43:58.791619  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 20:43:58.835414  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:43:58.870987  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/certs/7565.pem --> /usr/share/ca-certificates/7565.pem (1338 bytes)
	I0918 20:43:58.897693  321053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem --> /usr/share/ca-certificates/75652.pem (1708 bytes)
	I0918 20:43:58.927464  321053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:43:58.948267  321053 ssh_runner.go:195] Run: openssl version
	I0918 20:43:58.954049  321053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:43:58.963955  321053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:43:58.967737  321053 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:43:58.967849  321053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:43:58.975078  321053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 20:43:58.984659  321053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7565.pem && ln -fs /usr/share/ca-certificates/7565.pem /etc/ssl/certs/7565.pem"
	I0918 20:43:58.995053  321053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7565.pem
	I0918 20:43:58.999137  321053 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:53 /usr/share/ca-certificates/7565.pem
	I0918 20:43:58.999306  321053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7565.pem
	I0918 20:43:59.006928  321053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7565.pem /etc/ssl/certs/51391683.0"
	I0918 20:43:59.017034  321053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75652.pem && ln -fs /usr/share/ca-certificates/75652.pem /etc/ssl/certs/75652.pem"
	I0918 20:43:59.032000  321053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75652.pem
	I0918 20:43:59.036262  321053 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:53 /usr/share/ca-certificates/75652.pem
	I0918 20:43:59.036331  321053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75652.pem
	I0918 20:43:59.044029  321053 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75652.pem /etc/ssl/certs/3ec20f2e.0"
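
The openssl/ln pairs above follow the standard OpenSSL CA-directory convention: each certificate in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0 (e.g. b5213941.0), which is how TLS clients locate the CA at verification time. A Go sketch reproducing the link step by shelling out to openssl; linkCert is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates the <subject-hash>.0 symlink for a CA certificate,
// matching the openssl x509 -hash / ln -fs pairs in the log.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
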
	I0918 20:43:59.053967  321053 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:43:59.057888  321053 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0918 20:43:59.065186  321053 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0918 20:43:59.073128  321053 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0918 20:43:59.081778  321053 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0918 20:43:59.089670  321053 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0918 20:43:59.097550  321053 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
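
Each -checkend 86400 call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a failing check would force regeneration before the cluster restarts. The same check in Go via crypto/x509; expiresWithin is a hypothetical helper:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window, the inverse of a passing -checkend run.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
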
	I0918 20:43:59.105035  321053 kubeadm.go:392] StartCluster: {Name:old-k8s-version-959748 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-959748 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:43:59.105189  321053 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 20:43:59.132062  321053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:43:59.146498  321053 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0918 20:43:59.146517  321053 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0918 20:43:59.146573  321053 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0918 20:43:59.156343  321053 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0918 20:43:59.156970  321053 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-959748" does not appear in /home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 20:43:59.157249  321053 kubeconfig.go:62] /home/jenkins/minikube-integration/19667-2236/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-959748" cluster setting kubeconfig missing "old-k8s-version-959748" context setting]
	I0918 20:43:59.157706  321053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/kubeconfig: {Name:mk8ee68a7fcf0033412d5c9abf2a4743eba0e82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:43:59.159517  321053 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0918 20:43:59.169262  321053 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0918 20:43:59.169321  321053 kubeadm.go:597] duration metric: took 22.797177ms to restartPrimaryControlPlane
	I0918 20:43:59.169332  321053 kubeadm.go:394] duration metric: took 64.305634ms to StartCluster
	I0918 20:43:59.169348  321053 settings.go:142] acquiring lock: {Name:mka60e55fdc2e0389e1fbfa23792ee022689e7b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:43:59.169417  321053 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 20:43:59.170426  321053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/kubeconfig: {Name:mk8ee68a7fcf0033412d5c9abf2a4743eba0e82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:43:59.170675  321053 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 20:43:59.171007  321053 config.go:182] Loaded profile config "old-k8s-version-959748": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 20:43:59.171064  321053 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0918 20:43:59.171136  321053 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-959748"
	I0918 20:43:59.171152  321053 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-959748"
	W0918 20:43:59.171157  321053 addons.go:243] addon storage-provisioner should already be in state true
	I0918 20:43:59.171179  321053 host.go:66] Checking if "old-k8s-version-959748" exists ...
	I0918 20:43:59.171786  321053 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-959748"
	I0918 20:43:59.171811  321053 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-959748"
	I0918 20:43:59.172095  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:59.172120  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:59.172681  321053 addons.go:69] Setting dashboard=true in profile "old-k8s-version-959748"
	I0918 20:43:59.172709  321053 addons.go:234] Setting addon dashboard=true in "old-k8s-version-959748"
	W0918 20:43:59.172717  321053 addons.go:243] addon dashboard should already be in state true
	I0918 20:43:59.172744  321053 host.go:66] Checking if "old-k8s-version-959748" exists ...
	I0918 20:43:59.173241  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:59.174531  321053 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-959748"
	I0918 20:43:59.177718  321053 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-959748"
	W0918 20:43:59.177737  321053 addons.go:243] addon metrics-server should already be in state true
	I0918 20:43:59.177792  321053 host.go:66] Checking if "old-k8s-version-959748" exists ...
	I0918 20:43:59.178282  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:59.177663  321053 out.go:177] * Verifying Kubernetes components...
	I0918 20:43:59.182403  321053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:43:59.205973  321053 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-959748"
	W0918 20:43:59.206000  321053 addons.go:243] addon default-storageclass should already be in state true
	I0918 20:43:59.206026  321053 host.go:66] Checking if "old-k8s-version-959748" exists ...
	I0918 20:43:59.206440  321053 cli_runner.go:164] Run: docker container inspect old-k8s-version-959748 --format={{.State.Status}}
	I0918 20:43:59.240077  321053 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 20:43:59.240104  321053 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 20:43:59.240173  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:59.251369  321053 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 20:43:59.254250  321053 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:43:59.254277  321053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 20:43:59.254342  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:59.275458  321053 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0918 20:43:59.275623  321053 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0918 20:43:59.281256  321053 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 20:43:59.281288  321053 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 20:43:59.281372  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:59.285138  321053 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0918 20:43:59.291355  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0918 20:43:59.291394  321053 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0918 20:43:59.291471  321053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-959748
	I0918 20:43:59.297895  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:59.316436  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:59.351118  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
	I0918 20:43:59.360385  321053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33081 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa Username:docker}
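The four "new ssh client" lines above all dial 127.0.0.1:33081, the host port Docker mapped to the node's port 22, authenticating as user "docker" with the profile's key. A sketch of the same connection, assuming golang.org/x/crypto/ssh (minikube's sshutil internals are not shown in this log):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19667-2236/.minikube/machines/old-k8s-version-959748/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway CI node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33081", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("sudo systemctl is-active kubelet")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}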
	I0918 20:43:59.370988  321053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:43:59.426604  321053 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-959748" to be "Ready" ...
	I0918 20:43:59.462377  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:43:59.513128  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:43:59.536713  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0918 20:43:59.536735  321053 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0918 20:43:59.568844  321053 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 20:43:59.568869  321053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0918 20:43:59.583183  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0918 20:43:59.583206  321053 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0918 20:43:59.625389  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:43:59.625432  321053 retry.go:31] will retry after 296.584499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
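From here the log settles into a pattern: every "kubectl apply" fails with "connection refused" while the apiserver is still coming up, and retry.go reschedules it after a short randomized delay (roughly 200ms-2s in this run) until it succeeds. A minimal stand-in for that loop, with applyFn as a hypothetical closure (this is a sketch of the observed behaviour, not minikube's retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryApply calls applyFn until it succeeds or attempts are exhausted,
	// sleeping a randomized few hundred milliseconds between failures.
	func retryApply(applyFn func() error, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = applyFn(); err == nil {
				return nil
			}
			delay := 200*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		n := 0
		err := retryApply(func() error {
			n++
			if n < 3 { // simulate the apiserver refusing the first attempts
				return fmt.Errorf("connection to the server localhost:8443 was refused")
			}
			return nil
		}, 10)
		fmt.Println("done:", err)
	}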
	I0918 20:43:59.630487  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0918 20:43:59.630513  321053 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0918 20:43:59.634095  321053 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 20:43:59.634118  321053 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 20:43:59.660695  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0918 20:43:59.660720  321053 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0918 20:43:59.689855  321053 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 20:43:59.689880  321053 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0918 20:43:59.709551  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:43:59.709591  321053 retry.go:31] will retry after 359.014116ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:43:59.714488  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0918 20:43:59.714528  321053 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0918 20:43:59.722246  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 20:43:59.751657  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0918 20:43:59.751682  321053 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0918 20:43:59.772376  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0918 20:43:59.772399  321053 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0918 20:43:59.794119  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0918 20:43:59.794143  321053 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0918 20:43:59.817780  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:43:59.817814  321053 retry.go:31] will retry after 222.556535ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:43:59.817951  321053 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 20:43:59.817966  321053 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0918 20:43:59.838131  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 20:43:59.922452  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 20:43:59.930896  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:43:59.930930  321053 retry.go:31] will retry after 248.561191ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 20:44:00.002588  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.002638  321053 retry.go:31] will retry after 539.836919ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.047413  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 20:44:00.068913  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:44:00.194148  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 20:44:00.490839  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.490881  321053 retry.go:31] will retry after 458.930928ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 20:44:00.498334  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.498407  321053 retry.go:31] will retry after 394.165907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 20:44:00.537358  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.537512  321053 retry.go:31] will retry after 267.861177ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.543299  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 20:44:00.677199  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.677242  321053 retry.go:31] will retry after 761.252704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.806533  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 20:44:00.888026  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.888060  321053 retry.go:31] will retry after 432.158532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.893267  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:44:00.950310  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 20:44:00.974080  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:00.974115  321053 retry.go:31] will retry after 287.052689ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 20:44:01.038450  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.038490  321053 retry.go:31] will retry after 361.518622ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.261886  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:44:01.320653  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0918 20:44:01.357384  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.357419  321053 retry.go:31] will retry after 562.19493ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.400861  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 20:44:01.423495  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.423539  321053 retry.go:31] will retry after 693.535677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.428136  321053 node_ready.go:53] error getting node "old-k8s-version-959748": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-959748": dial tcp 192.168.76.2:8443: connect: connection refused
	I0918 20:44:01.439429  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 20:44:01.574062  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.574095  321053 retry.go:31] will retry after 970.749518ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 20:44:01.628693  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.628738  321053 retry.go:31] will retry after 523.28323ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:01.920840  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0918 20:44:02.006877  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:02.006912  321053 retry.go:31] will retry after 1.702334794s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:02.118245  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 20:44:02.152785  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0918 20:44:02.464771  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:02.464858  321053 retry.go:31] will retry after 1.251041621s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0918 20:44:02.464908  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:02.464920  321053 retry.go:31] will retry after 1.799217384s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:02.545947  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0918 20:44:02.678458  321053 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:02.678495  321053 retry.go:31] will retry after 784.651065ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0918 20:44:03.464004  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 20:44:03.710234  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0918 20:44:03.716595  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0918 20:44:04.265035  321053 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 20:44:11.310959  321053 node_ready.go:49] node "old-k8s-version-959748" has status "Ready":"True"
	I0918 20:44:11.310991  321053 node_ready.go:38] duration metric: took 11.884298336s for node "old-k8s-version-959748" to be "Ready" ...
	I0918 20:44:11.311006  321053 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
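The 11.88s node wait above resolves once the node reports the NodeReady condition as True. A sketch of that check, assuming k8s.io/client-go (node_ready.go itself is not shown in this log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"old-k8s-version-959748", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready:", c.Status) // "True" once the kubelet reports healthy
			}
		}
	}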
	I0918 20:44:11.520168  321053 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-wqss6" in "kube-system" namespace to be "Ready" ...
	I0918 20:44:11.665254  321053 pod_ready.go:93] pod "coredns-74ff55c5b-wqss6" in "kube-system" namespace has status "Ready":"True"
	I0918 20:44:11.665332  321053 pod_ready.go:82] duration metric: took 145.052803ms for pod "coredns-74ff55c5b-wqss6" in "kube-system" namespace to be "Ready" ...
	I0918 20:44:11.665358  321053 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:44:13.439009  321053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.974957733s)
	I0918 20:44:13.439053  321053 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-959748"
	I0918 20:44:13.439103  321053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.728722111s)
	I0918 20:44:13.439388  321053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.722685212s)
	I0918 20:44:13.439584  321053 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.174456859s)
	I0918 20:44:13.442568  321053 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-959748 addons enable metrics-server
	
	I0918 20:44:13.455814  321053 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I0918 20:44:13.466194  321053 addons.go:510] duration metric: took 14.295128807s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I0918 20:44:13.673425  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:16.173475  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:18.673159  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:21.171915  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:23.172628  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:25.173099  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:27.676513  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:30.177759  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:32.672231  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:35.173116  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:37.173614  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:39.671932  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:42.172188  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:44.172721  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:46.671462  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:48.674294  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:51.232040  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:53.672723  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:55.703118  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:44:58.174377  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:00.237499  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:02.673195  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:05.174367  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:07.672455  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:09.673222  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:11.673524  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:14.172309  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:16.671504  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:18.672299  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:20.672441  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:23.172136  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:25.177662  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:27.672598  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:30.174321  321053 pod_ready.go:103] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:32.675330  321053 pod_ready.go:93] pod "etcd-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"True"
	I0918 20:45:32.675358  321053 pod_ready.go:82] duration metric: took 1m21.009976551s for pod "etcd-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.675369  321053 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.681174  321053 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"True"
	I0918 20:45:32.681205  321053 pod_ready.go:82] duration metric: took 5.827464ms for pod "kube-apiserver-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.681217  321053 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.687040  321053 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"True"
	I0918 20:45:32.687063  321053 pod_ready.go:82] duration metric: took 5.839616ms for pod "kube-controller-manager-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.687077  321053 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-6qhft" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.692379  321053 pod_ready.go:93] pod "kube-proxy-6qhft" in "kube-system" namespace has status "Ready":"True"
	I0918 20:45:32.692408  321053 pod_ready.go:82] duration metric: took 5.31655ms for pod "kube-proxy-6qhft" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:32.692420  321053 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:34.698760  321053 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:36.700026  321053 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-959748" in "kube-system" namespace has status "Ready":"True"
	I0918 20:45:36.700058  321053 pod_ready.go:82] duration metric: took 4.007630837s for pod "kube-scheduler-old-k8s-version-959748" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:36.700073  321053 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace to be "Ready" ...
	I0918 20:45:38.707392  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:41.207961  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:43.709005  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:46.207221  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:48.214443  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:50.706404  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:52.707469  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:54.708124  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:57.206617  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:45:59.207062  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:01.706335  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:04.206975  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:06.706644  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:09.206539  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:11.206588  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:13.206788  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:15.216520  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:17.706050  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:19.706989  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:21.707078  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:24.206688  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:26.712903  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:29.207007  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:31.706075  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:33.706140  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:35.710799  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:38.206802  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:40.707181  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:43.206601  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:45.216675  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:47.706926  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:50.207055  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:52.226877  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:54.705861  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:56.707133  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:46:59.207719  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:01.706841  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:04.206607  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:06.207143  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:08.706127  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:10.706704  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:12.706807  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:15.208181  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:17.706753  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:20.207095  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:22.706308  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:24.706424  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:26.709296  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:29.207558  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:31.706655  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:33.710222  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:36.206515  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:38.206979  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:40.207620  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:42.208526  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:44.215900  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:46.706530  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:49.207407  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:51.209583  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:53.706061  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:55.706952  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:47:58.206806  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:00.256556  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:02.707116  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:05.207300  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:07.706979  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:10.207494  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:12.706881  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:14.708658  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:17.206562  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:19.206928  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:21.707554  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:24.206674  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:26.207614  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:28.208128  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:30.219562  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:32.706811  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:34.707584  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:37.206659  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:39.206991  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:41.706698  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:44.206941  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:46.207198  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:48.706697  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:51.206096  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:53.207619  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:55.706930  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:48:58.206806  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:00.363754  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:02.706959  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:05.207089  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:07.707208  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:10.207561  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:12.706484  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:14.706697  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:16.707601  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:19.207137  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:21.706322  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:23.715544  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:26.206493  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:28.209332  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:30.217385  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:32.707913  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:34.709527  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:36.705765  321053 pod_ready.go:82] duration metric: took 4m0.005676003s for pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace to be "Ready" ...
	E0918 20:49:36.705795  321053 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 20:49:36.705806  321053 pod_ready.go:39] duration metric: took 5m25.394785707s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
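Note: the long run of pod_ready.go:103 entries above is a poll-until-deadline loop — the pod's Ready condition is re-checked roughly every 2.5s until it flips to True or the 4m0s context expires, which produces the "context deadline exceeded" error just logged. A minimal stdlib sketch of that pattern (the isPodReady helper and the intervals are placeholders, not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // isPodReady stands in for a real Ready-condition check against the API server.
    func isPodReady() bool { return false }

    func waitPodReady(timeout, interval time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if isPodReady() {
                return nil
            }
            select {
            case <-ctx.Done():
                // Mirrors the "waitPodCondition: context deadline exceeded" error above.
                return fmt.Errorf("waitPodCondition: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        // 4m timeout and ~2.5s interval match the cadence seen in this log.
        if err := waitPodReady(4*time.Minute, 2500*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }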
	I0918 20:49:36.705826  321053 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:49:36.705912  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 20:49:36.734397  321053 logs.go:276] 2 containers: [93890fa25a1b 42d259a39ced]
	I0918 20:49:36.734548  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 20:49:36.755622  321053 logs.go:276] 2 containers: [468c9a428546 ee3fb21586d8]
	I0918 20:49:36.755748  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 20:49:36.777370  321053 logs.go:276] 2 containers: [c469388f2e86 22f71e25c69d]
	I0918 20:49:36.777465  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 20:49:36.796457  321053 logs.go:276] 2 containers: [0b5c2c549d85 d7d92bf388a8]
	I0918 20:49:36.796561  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 20:49:36.818276  321053 logs.go:276] 2 containers: [576f1c60a0bf 2906bf503bae]
	I0918 20:49:36.818413  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 20:49:36.838974  321053 logs.go:276] 2 containers: [d5f549195fea 04f0f084f259]
	I0918 20:49:36.839110  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 20:49:36.873307  321053 logs.go:276] 0 containers: []
	W0918 20:49:36.873379  321053 logs.go:278] No container was found matching "kindnet"
	I0918 20:49:36.873471  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0918 20:49:36.891877  321053 logs.go:276] 1 containers: [57496d116106]
	I0918 20:49:36.891981  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 20:49:36.910982  321053 logs.go:276] 2 containers: [d98463859c48 ebd705b772fc]
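Each "2 containers" line above comes from a docker ps lookup filtered on the k8s_<component> name prefix; two IDs per component are expected here, since docker ps -a also lists the exited pre-restart containers from this SecondStart scenario. A rough local equivalent of that lookup, using os/exec in place of minikube's ssh_runner (flags copied from the commands in the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            // Same shape as the "N containers: [...]" lines in the log.
            fmt.Printf("%d containers: %v\n", len(ids), ids)
        }
    }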
	I0918 20:49:36.911014  321053 logs.go:123] Gathering logs for kubelet ...
	I0918 20:49:36.911025  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 20:49:36.969958  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.544419    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.971458  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.821194    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.972124  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:14 old-k8s-version-959748 kubelet[1382]: E0918 20:44:14.888406    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.974834  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:26 old-k8s-version-959748 kubelet[1382]: E0918 20:44:26.455937    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.979570  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:37 old-k8s-version-959748 kubelet[1382]: E0918 20:44:37.301443    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.979778  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:38 old-k8s-version-959748 kubelet[1382]: E0918 20:44:38.321604    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.979963  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:40 old-k8s-version-959748 kubelet[1382]: E0918 20:44:40.410130    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.980740  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:44 old-k8s-version-959748 kubelet[1382]: E0918 20:44:44.392000    1382 pod_workers.go:191] Error syncing pod 773bb59a-fa3d-4310-a265-018dd10517a1 ("storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"
	W0918 20:49:36.982823  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:51 old-k8s-version-959748 kubelet[1382]: E0918 20:44:51.428423    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.985433  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:53 old-k8s-version-959748 kubelet[1382]: E0918 20:44:53.973975    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.985774  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:06 old-k8s-version-959748 kubelet[1382]: E0918 20:45:06.402554    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.985975  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:09 old-k8s-version-959748 kubelet[1382]: E0918 20:45:09.412890    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.986160  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:21 old-k8s-version-959748 kubelet[1382]: E0918 20:45:21.403236    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.988430  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:24 old-k8s-version-959748 kubelet[1382]: E0918 20:45:24.993892    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.990524  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:35 old-k8s-version-959748 kubelet[1382]: E0918 20:45:35.427494    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.990727  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:36 old-k8s-version-959748 kubelet[1382]: E0918 20:45:36.425415    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.990912  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:47 old-k8s-version-959748 kubelet[1382]: E0918 20:45:47.404058    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991112  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:50 old-k8s-version-959748 kubelet[1382]: E0918 20:45:50.420469    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991309  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:01 old-k8s-version-959748 kubelet[1382]: E0918 20:46:01.422276    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991505  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:02 old-k8s-version-959748 kubelet[1382]: E0918 20:46:02.406518    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991685  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.403352    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.993966  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.981455    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.994166  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:26 old-k8s-version-959748 kubelet[1382]: E0918 20:46:26.405462    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994354  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:28 old-k8s-version-959748 kubelet[1382]: E0918 20:46:28.402770    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994551  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:37 old-k8s-version-959748 kubelet[1382]: E0918 20:46:37.433602    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994737  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:41 old-k8s-version-959748 kubelet[1382]: E0918 20:46:41.402626    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994934  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:48 old-k8s-version-959748 kubelet[1382]: E0918 20:46:48.402441    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.995119  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:55 old-k8s-version-959748 kubelet[1382]: E0918 20:46:55.402485    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.995329  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:01 old-k8s-version-959748 kubelet[1382]: E0918 20:47:01.402464    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.997396  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:10 old-k8s-version-959748 kubelet[1382]: E0918 20:47:10.427537    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.997603  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:14 old-k8s-version-959748 kubelet[1382]: E0918 20:47:14.402961    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.997787  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:23 old-k8s-version-959748 kubelet[1382]: E0918 20:47:23.402466    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.997985  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:25 old-k8s-version-959748 kubelet[1382]: E0918 20:47:25.402567    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.998171  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:35 old-k8s-version-959748 kubelet[1382]: E0918 20:47:35.404622    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.000420  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:41 old-k8s-version-959748 kubelet[1382]: E0918 20:47:41.114133    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:37.000608  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:50 old-k8s-version-959748 kubelet[1382]: E0918 20:47:50.403588    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.000808  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:54 old-k8s-version-959748 kubelet[1382]: E0918 20:47:54.411522    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.000993  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:01 old-k8s-version-959748 kubelet[1382]: E0918 20:48:01.402535    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001191  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:08 old-k8s-version-959748 kubelet[1382]: E0918 20:48:08.403570    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001375  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:14 old-k8s-version-959748 kubelet[1382]: E0918 20:48:14.405384    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001572  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:21 old-k8s-version-959748 kubelet[1382]: E0918 20:48:21.402696    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001756  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:28 old-k8s-version-959748 kubelet[1382]: E0918 20:48:28.405928    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001962  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:34 old-k8s-version-959748 kubelet[1382]: E0918 20:48:34.402538    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002151  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:39 old-k8s-version-959748 kubelet[1382]: E0918 20:48:39.402531    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002347  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:46 old-k8s-version-959748 kubelet[1382]: E0918 20:48:46.412489    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002531  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:54 old-k8s-version-959748 kubelet[1382]: E0918 20:48:54.402285    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002728  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:58 old-k8s-version-959748 kubelet[1382]: E0918 20:48:58.405409    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002915  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003112  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003302  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003507  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003701  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
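The "Found kubelet problem" warnings above come from tailing the kubelet journal and flagging pod-sync errors; every hit is one of two pulls that keep failing in this test — metrics-server pointed at the unreachable fake.domain registry, and dashboard-metrics-scraper pulling the deprecated schema-1 echoserver:1.4 image. A sketch of such a scan (the regexp is an illustrative guess, not minikube's actual pattern list, and a real run may need sudo as in the log):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
    )

    // problem flags kubelet pod-sync failures like the ones reported above.
    var problem = regexp.MustCompile(`pod_workers\.go.*Error syncing pod`)

    func main() {
        cmd := exec.Command("journalctl", "-u", "kubelet", "-n", "400")
        out, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(out)
        for sc.Scan() {
            if line := sc.Text(); problem.MatchString(line) {
                fmt.Println("Found kubelet problem:", line)
            }
        }
        _ = cmd.Wait()
    }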
	I0918 20:49:37.003711  321053 logs.go:123] Gathering logs for kube-apiserver [93890fa25a1b] ...
	I0918 20:49:37.003725  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93890fa25a1b"
	I0918 20:49:37.113253  321053 logs.go:123] Gathering logs for kube-apiserver [42d259a39ced] ...
	I0918 20:49:37.113357  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d259a39ced"
	I0918 20:49:37.263513  321053 logs.go:123] Gathering logs for etcd [468c9a428546] ...
	I0918 20:49:37.263551  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c9a428546"
	I0918 20:49:37.292229  321053 logs.go:123] Gathering logs for etcd [ee3fb21586d8] ...
	I0918 20:49:37.292266  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee3fb21586d8"
	I0918 20:49:37.336122  321053 logs.go:123] Gathering logs for kubernetes-dashboard [57496d116106] ...
	I0918 20:49:37.336154  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57496d116106"
	I0918 20:49:37.360987  321053 logs.go:123] Gathering logs for coredns [c469388f2e86] ...
	I0918 20:49:37.361019  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c469388f2e86"
	I0918 20:49:37.387321  321053 logs.go:123] Gathering logs for coredns [22f71e25c69d] ...
	I0918 20:49:37.387360  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f71e25c69d"
	I0918 20:49:37.425588  321053 logs.go:123] Gathering logs for kube-proxy [2906bf503bae] ...
	I0918 20:49:37.425617  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2906bf503bae"
	I0918 20:49:37.463486  321053 logs.go:123] Gathering logs for kube-controller-manager [d5f549195fea] ...
	I0918 20:49:37.463532  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f549195fea"
	I0918 20:49:37.537929  321053 logs.go:123] Gathering logs for storage-provisioner [ebd705b772fc] ...
	I0918 20:49:37.537969  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd705b772fc"
	I0918 20:49:37.575995  321053 logs.go:123] Gathering logs for describe nodes ...
	I0918 20:49:37.576022  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 20:49:37.842401  321053 logs.go:123] Gathering logs for kube-scheduler [0b5c2c549d85] ...
	I0918 20:49:37.842436  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5c2c549d85"
	I0918 20:49:37.893343  321053 logs.go:123] Gathering logs for kube-scheduler [d7d92bf388a8] ...
	I0918 20:49:37.893378  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d92bf388a8"
	I0918 20:49:37.947686  321053 logs.go:123] Gathering logs for kube-proxy [576f1c60a0bf] ...
	I0918 20:49:37.947727  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 576f1c60a0bf"
	I0918 20:49:37.979029  321053 logs.go:123] Gathering logs for container status ...
	I0918 20:49:37.979057  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 20:49:38.132771  321053 logs.go:123] Gathering logs for dmesg ...
	I0918 20:49:38.132820  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 20:49:38.180316  321053 logs.go:123] Gathering logs for kube-controller-manager [04f0f084f259] ...
	I0918 20:49:38.180356  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f0f084f259"
	I0918 20:49:38.301788  321053 logs.go:123] Gathering logs for storage-provisioner [d98463859c48] ...
	I0918 20:49:38.301864  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d98463859c48"
	I0918 20:49:38.353844  321053 logs.go:123] Gathering logs for Docker ...
	I0918 20:49:38.353872  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 20:49:38.418488  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:38.418527  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 20:49:38.419252  321053 out.go:270] X Problems detected in kubelet:
	W0918 20:49:38.419270  321053 out.go:270]   Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419277  321053 out.go:270]   Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419297  321053 out.go:270]   Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419304  321053 out.go:270]   Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419463  321053 out.go:270]   Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 20:49:38.419500  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:38.419514  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:48.421610  321053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:49:48.437718  321053 api_server.go:72] duration metric: took 5m49.267006218s to wait for apiserver process to appear ...
	I0918 20:49:48.437743  321053 api_server.go:88] waiting for apiserver healthz status ...
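With the apiserver process confirmed via pgrep, the next phase polls the apiserver's /healthz endpoint until it answers 200 OK. A hedged sketch of that kind of health poll (the endpoint URL is an assumption inferred from the node's 192.168.76.x network seen earlier; certificate verification is skipped for brevity, where real code would trust the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("healthz not ready after %s", timeout)
    }

    func main() {
        // Hypothetical endpoint; the real address comes from the kubeconfig.
        if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }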
	I0918 20:49:48.437839  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 20:49:48.461088  321053 logs.go:276] 2 containers: [93890fa25a1b 42d259a39ced]
	I0918 20:49:48.461168  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 20:49:48.500728  321053 logs.go:276] 2 containers: [468c9a428546 ee3fb21586d8]
	I0918 20:49:48.500810  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 20:49:48.543477  321053 logs.go:276] 2 containers: [c469388f2e86 22f71e25c69d]
	I0918 20:49:48.543560  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 20:49:48.571191  321053 logs.go:276] 2 containers: [0b5c2c549d85 d7d92bf388a8]
	I0918 20:49:48.571293  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 20:49:48.594937  321053 logs.go:276] 2 containers: [576f1c60a0bf 2906bf503bae]
	I0918 20:49:48.595034  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 20:49:48.624755  321053 logs.go:276] 2 containers: [d5f549195fea 04f0f084f259]
	I0918 20:49:48.624839  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 20:49:48.659558  321053 logs.go:276] 0 containers: []
	W0918 20:49:48.659580  321053 logs.go:278] No container was found matching "kindnet"
	I0918 20:49:48.659650  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0918 20:49:48.690180  321053 logs.go:276] 1 containers: [57496d116106]
	I0918 20:49:48.690328  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 20:49:48.725188  321053 logs.go:276] 2 containers: [d98463859c48 ebd705b772fc]
	I0918 20:49:48.725268  321053 logs.go:123] Gathering logs for kube-scheduler [0b5c2c549d85] ...
	I0918 20:49:48.725294  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5c2c549d85"
	I0918 20:49:48.778547  321053 logs.go:123] Gathering logs for kube-proxy [576f1c60a0bf] ...
	I0918 20:49:48.778626  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 576f1c60a0bf"
	I0918 20:49:48.808196  321053 logs.go:123] Gathering logs for kube-controller-manager [04f0f084f259] ...
	I0918 20:49:48.808277  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f0f084f259"
	I0918 20:49:48.871563  321053 logs.go:123] Gathering logs for etcd [468c9a428546] ...
	I0918 20:49:48.871642  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c9a428546"
	I0918 20:49:48.904695  321053 logs.go:123] Gathering logs for etcd [ee3fb21586d8] ...
	I0918 20:49:48.904769  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee3fb21586d8"
	I0918 20:49:48.940711  321053 logs.go:123] Gathering logs for Docker ...
	I0918 20:49:48.940787  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 20:49:48.968858  321053 logs.go:123] Gathering logs for kube-proxy [2906bf503bae] ...
	I0918 20:49:48.968935  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2906bf503bae"
	I0918 20:49:48.991645  321053 logs.go:123] Gathering logs for storage-provisioner [ebd705b772fc] ...
	I0918 20:49:48.991670  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd705b772fc"
	I0918 20:49:49.013606  321053 logs.go:123] Gathering logs for describe nodes ...
	I0918 20:49:49.013631  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 20:49:49.190708  321053 logs.go:123] Gathering logs for kube-apiserver [42d259a39ced] ...
	I0918 20:49:49.190799  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d259a39ced"
	I0918 20:49:49.263333  321053 logs.go:123] Gathering logs for coredns [c469388f2e86] ...
	I0918 20:49:49.263420  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c469388f2e86"
	I0918 20:49:49.292225  321053 logs.go:123] Gathering logs for kube-controller-manager [d5f549195fea] ...
	I0918 20:49:49.292252  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f549195fea"
	I0918 20:49:49.339748  321053 logs.go:123] Gathering logs for kubelet ...
	I0918 20:49:49.339818  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 20:49:49.419853  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.544419    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.421347  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.821194    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.422021  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:14 old-k8s-version-959748 kubelet[1382]: E0918 20:44:14.888406    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.424781  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:26 old-k8s-version-959748 kubelet[1382]: E0918 20:44:26.455937    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.432054  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:37 old-k8s-version-959748 kubelet[1382]: E0918 20:44:37.301443    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.432317  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:38 old-k8s-version-959748 kubelet[1382]: E0918 20:44:38.321604    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.432524  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:40 old-k8s-version-959748 kubelet[1382]: E0918 20:44:40.410130    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.433318  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:44 old-k8s-version-959748 kubelet[1382]: E0918 20:44:44.392000    1382 pod_workers.go:191] Error syncing pod 773bb59a-fa3d-4310-a265-018dd10517a1 ("storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"
	W0918 20:49:49.437488  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:51 old-k8s-version-959748 kubelet[1382]: E0918 20:44:51.428423    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.441416  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:53 old-k8s-version-959748 kubelet[1382]: E0918 20:44:53.973975    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.441777  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:06 old-k8s-version-959748 kubelet[1382]: E0918 20:45:06.402554    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.441999  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:09 old-k8s-version-959748 kubelet[1382]: E0918 20:45:09.412890    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.442203  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:21 old-k8s-version-959748 kubelet[1382]: E0918 20:45:21.403236    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.444464  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:24 old-k8s-version-959748 kubelet[1382]: E0918 20:45:24.993892    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.446545  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:35 old-k8s-version-959748 kubelet[1382]: E0918 20:45:35.427494    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.446765  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:36 old-k8s-version-959748 kubelet[1382]: E0918 20:45:36.425415    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.446979  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:47 old-k8s-version-959748 kubelet[1382]: E0918 20:45:47.404058    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447198  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:50 old-k8s-version-959748 kubelet[1382]: E0918 20:45:50.420469    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447418  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:01 old-k8s-version-959748 kubelet[1382]: E0918 20:46:01.422276    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447639  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:02 old-k8s-version-959748 kubelet[1382]: E0918 20:46:02.406518    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447845  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.403352    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.452525  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.981455    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.452757  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:26 old-k8s-version-959748 kubelet[1382]: E0918 20:46:26.405462    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.452965  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:28 old-k8s-version-959748 kubelet[1382]: E0918 20:46:28.402770    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453193  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:37 old-k8s-version-959748 kubelet[1382]: E0918 20:46:37.433602    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453398  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:41 old-k8s-version-959748 kubelet[1382]: E0918 20:46:41.402626    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453615  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:48 old-k8s-version-959748 kubelet[1382]: E0918 20:46:48.402441    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453823  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:55 old-k8s-version-959748 kubelet[1382]: E0918 20:46:55.402485    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.454038  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:01 old-k8s-version-959748 kubelet[1382]: E0918 20:47:01.402464    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.456137  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:10 old-k8s-version-959748 kubelet[1382]: E0918 20:47:10.427537    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.456363  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:14 old-k8s-version-959748 kubelet[1382]: E0918 20:47:14.402961    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.456573  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:23 old-k8s-version-959748 kubelet[1382]: E0918 20:47:23.402466    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.456807  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:25 old-k8s-version-959748 kubelet[1382]: E0918 20:47:25.402567    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.457010  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:35 old-k8s-version-959748 kubelet[1382]: E0918 20:47:35.404622    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.459270  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:41 old-k8s-version-959748 kubelet[1382]: E0918 20:47:41.114133    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.459485  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:50 old-k8s-version-959748 kubelet[1382]: E0918 20:47:50.403588    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.459700  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:54 old-k8s-version-959748 kubelet[1382]: E0918 20:47:54.411522    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.463469  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:01 old-k8s-version-959748 kubelet[1382]: E0918 20:48:01.402535    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.463695  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:08 old-k8s-version-959748 kubelet[1382]: E0918 20:48:08.403570    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.463899  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:14 old-k8s-version-959748 kubelet[1382]: E0918 20:48:14.405384    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464110  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:21 old-k8s-version-959748 kubelet[1382]: E0918 20:48:21.402696    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464318  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:28 old-k8s-version-959748 kubelet[1382]: E0918 20:48:28.405928    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464536  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:34 old-k8s-version-959748 kubelet[1382]: E0918 20:48:34.402538    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464746  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:39 old-k8s-version-959748 kubelet[1382]: E0918 20:48:39.402531    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464978  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:46 old-k8s-version-959748 kubelet[1382]: E0918 20:48:46.412489    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465164  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:54 old-k8s-version-959748 kubelet[1382]: E0918 20:48:54.402285    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465357  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:58 old-k8s-version-959748 kubelet[1382]: E0918 20:48:58.405409    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465537  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465734  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465916  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466108  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466289  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466481  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:37 old-k8s-version-959748 kubelet[1382]: E0918 20:49:37.402868    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466665  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:45 old-k8s-version-959748 kubelet[1382]: E0918 20:49:45.406022    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 20:49:49.466672  321053 logs.go:123] Gathering logs for dmesg ...
	I0918 20:49:49.466688  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 20:49:49.507155  321053 logs.go:123] Gathering logs for kube-scheduler [d7d92bf388a8] ...
	I0918 20:49:49.507181  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d92bf388a8"
	I0918 20:49:49.532579  321053 logs.go:123] Gathering logs for kubernetes-dashboard [57496d116106] ...
	I0918 20:49:49.532650  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57496d116106"
	I0918 20:49:49.565575  321053 logs.go:123] Gathering logs for storage-provisioner [d98463859c48] ...
	I0918 20:49:49.565644  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d98463859c48"
	I0918 20:49:49.603377  321053 logs.go:123] Gathering logs for container status ...
	I0918 20:49:49.603453  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 20:49:49.682568  321053 logs.go:123] Gathering logs for kube-apiserver [93890fa25a1b] ...
	I0918 20:49:49.682646  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93890fa25a1b"
	I0918 20:49:49.758539  321053 logs.go:123] Gathering logs for coredns [22f71e25c69d] ...
	I0918 20:49:49.758609  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f71e25c69d"
	I0918 20:49:49.793203  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:49.793261  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 20:49:49.793327  321053 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0918 20:49:49.793381  321053 out.go:270]   Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793455  321053 out.go:270]   Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793501  321053 out.go:270]   Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793542  321053 out.go:270]   Sep 18 20:49:37 old-k8s-version-959748 kubelet[1382]: E0918 20:49:37.402868    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  Sep 18 20:49:37 old-k8s-version-959748 kubelet[1382]: E0918 20:49:37.402868    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793575  321053 out.go:270]   Sep 18 20:49:45 old-k8s-version-959748 kubelet[1382]: E0918 20:49:45.406022    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 18 20:49:45 old-k8s-version-959748 kubelet[1382]: E0918 20:49:45.406022    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 20:49:49.793616  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:49.793638  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:59.794427  321053 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0918 20:49:59.807734  321053 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0918 20:49:59.811016  321053 out.go:201] 
	W0918 20:49:59.815069  321053 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0918 20:49:59.815113  321053 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0918 20:49:59.815138  321053 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0918 20:49:59.815147  321053 out.go:270] * 
	* 
	W0918 20:49:59.816029  321053 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:49:59.817787  321053 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-959748 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
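The kubelet noise in the stderr block above reduces to two expected pull failures: fake.domain/registry.k8s.io/echoserver:1.4 can never resolve (the test deliberately points the metrics-server registry at a bogus host via --registries=MetricsServer=fake.domain), and registry.k8s.io/echoserver:1.4 is a Docker image manifest v2, schema 1 image that this daemon refuses by default. A minimal sketch for confirming both failure modes by hand on the node, plus the recovery the log itself suggests (standard docker/minikube commands; the profile name and resolver IP are taken from the log above):

	# Schema-1 deprecation: on a daemon where schema-1 pulls are disabled (as here),
	# this reproduces the DEPRECATION NOTICE seen in the kubelet log.
	docker pull registry.k8s.io/echoserver:1.4
	# Bogus registry host: lookup against the node's resolver (192.168.76.1) fails,
	# matching the "no such host" ErrImagePull above.
	nslookup fake.domain 192.168.76.1
	# Recovery suggested by minikube for K8S_UNHEALTHY_CONTROL_PLANE:
	out/minikube-linux-arm64 delete --all --purge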
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-959748
helpers_test.go:235: (dbg) docker inspect old-k8s-version-959748:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d98c59975a8a8a8bb1c9e4db3f10de950f55a1bab59f51b95a97f6a82ed757a",
	        "Created": "2024-09-18T20:40:44.418927945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-18T20:43:51.372587627Z",
	            "FinishedAt": "2024-09-18T20:43:49.926965502Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/5d98c59975a8a8a8bb1c9e4db3f10de950f55a1bab59f51b95a97f6a82ed757a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d98c59975a8a8a8bb1c9e4db3f10de950f55a1bab59f51b95a97f6a82ed757a/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d98c59975a8a8a8bb1c9e4db3f10de950f55a1bab59f51b95a97f6a82ed757a/hosts",
	        "LogPath": "/var/lib/docker/containers/5d98c59975a8a8a8bb1c9e4db3f10de950f55a1bab59f51b95a97f6a82ed757a/5d98c59975a8a8a8bb1c9e4db3f10de950f55a1bab59f51b95a97f6a82ed757a-json.log",
	        "Name": "/old-k8s-version-959748",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-959748:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-959748",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5c56a3d907596dd2614b571681de21cafa00a035ed7f82cf2aa94a0cff942712-init/diff:/var/lib/docker/overlay2/2d5f4db6bef4f73456b3d6729836bc99a064b2dff1ec273e613fe21fbf6cf84d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5c56a3d907596dd2614b571681de21cafa00a035ed7f82cf2aa94a0cff942712/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5c56a3d907596dd2614b571681de21cafa00a035ed7f82cf2aa94a0cff942712/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5c56a3d907596dd2614b571681de21cafa00a035ed7f82cf2aa94a0cff942712/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-959748",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-959748/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-959748",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-959748",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-959748",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c422c4610db6981bdd7d0613729fb852ffdd53c2f61cbba01385c8db2a9440e",
	            "SandboxKey": "/var/run/docker/netns/8c422c4610db",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-959748": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "52be7fa6c1020c367e2847a7c9b1e6316e22c4bd18dc30f3f37ecb8bb0bacfed",
	                    "EndpointID": "337387d932f59c4a05e8504da6be6ae8077081a4e781708d563d90243dcb90e6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-959748",
	                        "5d98c59975a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
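Rather than scanning the full JSON, the same post-mortem facts can be pulled out with docker inspect Go templates; a sketch, with field paths taken from the output above (not part of the test harness):

	# Container state and PID, from the State block.
	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}}, restarts {{.RestartCount}})' old-k8s-version-959748
	# Node IP on the profile network, from NetworkSettings.Networks.
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-959748").IPAddress}}' old-k8s-version-959748
	# Host port mapped to the apiserver (8443/tcp), from NetworkSettings.Ports.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-959748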
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-959748 -n old-k8s-version-959748
helpers_test.go:239: (dbg) Done: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-959748 -n old-k8s-version-959748: (1.089795472s)
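status --format takes the same Go-template syntax as docker inspect; besides {{.Host}} used here, the status struct also reports kubelet and apiserver state, so a compact one-line health check might look like the sketch below (the Kubelet/APIServer field names are assumptions based on minikube's default status output, not verified here):

	# A sketch; field names other than Host are assumed, not confirmed by this log.
	out/minikube-linux-arm64 status -p old-k8s-version-959748 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'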
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-959748 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-959748 logs -n 25: (2.4317795s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | docker-flags-948087 ssh                                | docker-flags-948087          | jenkins | v1.34.0 | 18 Sep 24 20:39 UTC | 18 Sep 24 20:39 UTC |
	|         | sudo systemctl show docker                             |                              |         |         |                     |                     |
	|         | --property=Environment                                 |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | docker-flags-948087 ssh                                | docker-flags-948087          | jenkins | v1.34.0 | 18 Sep 24 20:39 UTC | 18 Sep 24 20:39 UTC |
	|         | sudo systemctl show docker                             |                              |         |         |                     |                     |
	|         | --property=ExecStart                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| delete  | -p docker-flags-948087                                 | docker-flags-948087          | jenkins | v1.34.0 | 18 Sep 24 20:39 UTC | 18 Sep 24 20:40 UTC |
	| start   | -p cert-options-886610                                 | cert-options-886610          | jenkins | v1.34.0 | 18 Sep 24 20:40 UTC | 18 Sep 24 20:40 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	| ssh     | cert-options-886610 ssh                                | cert-options-886610          | jenkins | v1.34.0 | 18 Sep 24 20:40 UTC | 18 Sep 24 20:40 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-886610 -- sudo                         | cert-options-886610          | jenkins | v1.34.0 | 18 Sep 24 20:40 UTC | 18 Sep 24 20:40 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-886610                                 | cert-options-886610          | jenkins | v1.34.0 | 18 Sep 24 20:40 UTC | 18 Sep 24 20:40 UTC |
	| start   | -p old-k8s-version-959748                              | old-k8s-version-959748       | jenkins | v1.34.0 | 18 Sep 24 20:40 UTC | 18 Sep 24 20:43 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-179155                              | cert-expiration-179155       | jenkins | v1.34.0 | 18 Sep 24 20:42 UTC | 18 Sep 24 20:43 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-179155                              | cert-expiration-179155       | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	| start   | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:44 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-959748        | old-k8s-version-959748       | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-959748                              | old-k8s-version-959748       | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-959748             | old-k8s-version-959748       | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC | 18 Sep 24 20:43 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-959748                              | old-k8s-version-959748       | jenkins | v1.34.0 | 18 Sep 24 20:43 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-689561  | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:44 UTC | 18 Sep 24 20:44 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:44 UTC | 18 Sep 24 20:44 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-689561       | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:44 UTC | 18 Sep 24 20:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:44 UTC | 18 Sep 24 20:49 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-689561                           | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:49 UTC | 18 Sep 24 20:49 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:49 UTC | 18 Sep 24 20:49 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:49 UTC | 18 Sep 24 20:49 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:49 UTC | 18 Sep 24 20:49 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-689561 | jenkins | v1.34.0 | 18 Sep 24 20:49 UTC | 18 Sep 24 20:49 UTC |
	|         | default-k8s-diff-port-689561                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-845058                                  | embed-certs-845058           | jenkins | v1.34.0 | 18 Sep 24 20:49 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 20:49:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 20:49:32.381574  334582 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:49:32.381776  334582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:32.381789  334582 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:32.381795  334582 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:32.382074  334582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 20:49:32.382583  334582 out.go:352] Setting JSON to false
	I0918 20:49:32.383813  334582 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5520,"bootTime":1726687053,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 20:49:32.383902  334582 start.go:139] virtualization:  
	I0918 20:49:32.386098  334582 out.go:177] * [embed-certs-845058] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 20:49:32.387832  334582 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 20:49:32.387951  334582 notify.go:220] Checking for updates...
	I0918 20:49:32.391996  334582 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 20:49:32.394289  334582 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 20:49:32.397110  334582 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 20:49:32.399206  334582 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 20:49:32.403418  334582 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 20:49:32.408915  334582 config.go:182] Loaded profile config "old-k8s-version-959748": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0918 20:49:32.409060  334582 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 20:49:32.440710  334582 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 20:49:32.440852  334582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:49:32.540405  334582 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 20:49:32.530094399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:49:32.540521  334582 docker.go:318] overlay module found
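
The probe above shells out to `docker system info --format "{{json .}}"` and decodes the JSON to learn the daemon's CPU count, memory, architecture, and server version. A minimal, self-contained sketch of that pattern in Go (not minikube's actual code; the struct keeps only a few of the fields visible in the log line):

    // probe_docker.go - hedged sketch of a `docker system info` JSON probe.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // dockerInfo picks out just a few fields from the full JSON document.
    type dockerInfo struct {
    	NCPU          int    `json:"NCPU"`
    	MemTotal      int64  `json:"MemTotal"`
    	Architecture  string `json:"Architecture"`
    	ServerVersion string `json:"ServerVersion"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		log.Fatalf("docker system info: %v", err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		log.Fatalf("decode: %v", err)
    	}
    	fmt.Printf("docker %s: %d CPUs, %d bytes RAM, arch %s\n",
    		info.ServerVersion, info.NCPU, info.MemTotal, info.Architecture)
    }
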
	I0918 20:49:32.543528  334582 out.go:177] * Using the docker driver based on user configuration
	I0918 20:49:32.546071  334582 start.go:297] selected driver: docker
	I0918 20:49:32.546099  334582 start.go:901] validating driver "docker" against <nil>
	I0918 20:49:32.546114  334582 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 20:49:32.546800  334582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:49:32.618931  334582 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-18 20:49:32.608774161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:49:32.619160  334582 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 20:49:32.619473  334582 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 20:49:32.621625  334582 out.go:177] * Using Docker driver with root privileges
	I0918 20:49:32.623822  334582 cni.go:84] Creating CNI manager for ""
	I0918 20:49:32.623906  334582 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 20:49:32.623923  334582 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
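
cni.go:158 applies a simple rule here: with the docker driver and the docker container runtime on Kubernetes v1.24+ (where dockershim networking is gone), a bridge CNI is recommended and NetworkPlugin is set to cni. A hedged sketch of that decision, illustrative only; minikube's real selection logic covers many more drivers and runtimes:

    package main

    import "fmt"

    // chooseCNI mirrors, in simplified form, the rule logged at cni.go:158.
    func chooseCNI(driver, runtime string, k8sMinor int) string {
    	if driver == "docker" && runtime == "docker" && k8sMinor >= 24 {
    		return "bridge" // leads to NetworkPlugin=cni, as the next log line shows
    	}
    	return "auto"
    }

    func main() {
    	fmt.Println(chooseCNI("docker", "docker", 31)) // prints "bridge"
    }
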
	I0918 20:49:32.624002  334582 start.go:340] cluster config:
	{Name:embed-certs-845058 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-845058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:49:32.626523  334582 out.go:177] * Starting "embed-certs-845058" primary control-plane node in "embed-certs-845058" cluster
	I0918 20:49:32.628833  334582 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 20:49:32.631404  334582 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0918 20:49:32.633807  334582 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 20:49:32.633866  334582 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 20:49:32.633879  334582 cache.go:56] Caching tarball of preloaded images
	I0918 20:49:32.633898  334582 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 20:49:32.634066  334582 preload.go:172] Found /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0918 20:49:32.634079  334582 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 20:49:32.634191  334582 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/config.json ...
	I0918 20:49:32.634220  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/config.json: {Name:mk15ccdbaada4b66cb06f536f580687508c9996d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
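
The config write above is guarded by a lock acquired with a 500ms retry delay and a 1m timeout (the Clock/Delay/Timeout fields in the lock spec). A sketch of that write-under-lock pattern, assuming a simple O_CREATE|O_EXCL lock file rather than minikube's actual lock implementation:

    // locked_write.go - hedged sketch of WriteFile-under-lock with retry/timeout.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
    	lock := path + ".lock"
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_CREATE|O_EXCL fails if the lock file already exists.
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			break
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out acquiring " + lock)
    		}
    		time.Sleep(delay)
    	}
    	defer os.Remove(lock)
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	err := writeFileLocked("config.json", []byte(`{"Name":"embed-certs-845058"}`),
    		500*time.Millisecond, time.Minute)
    	if err != nil {
    		fmt.Println("write failed:", err)
    	}
    }
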
	W0918 20:49:32.670935  334582 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0918 20:49:32.670957  334582 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 20:49:32.671050  334582 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 20:49:32.671068  334582 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 20:49:32.671073  334582 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 20:49:32.671087  334582 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 20:49:32.671105  334582 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0918 20:49:32.804967  334582 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0918 20:49:32.805007  334582 cache.go:194] Successfully downloaded all kic artifacts
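
image.go falls back here: the kicbase image tagged in the local daemon has the wrong architecture for this arm64 host, so the locally cached tarball is used instead and nothing is pulled from the registry. One way to express that fallback with the docker CLI (a sketch under assumed paths, not the library-based loading minikube actually performs):

    // cache_load.go - hedged sketch of loading a cached image tarball.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func loadFromCache(tarball string) error {
    	if _, err := os.Stat(tarball); err != nil {
    		return fmt.Errorf("no cached tarball: %w", err)
    	}
    	// `docker load -i` imports the image exactly as cached on disk.
    	cmd := exec.Command("docker", "load", "-i", tarball)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	// Placeholder path; the real cache lives under .minikube/cache.
    	if err := loadFromCache(os.ExpandEnv("$HOME/.minikube/cache/kic/base.tar")); err != nil {
    		fmt.Println(err)
    	}
    }
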
	I0918 20:49:32.805047  334582 start.go:360] acquireMachinesLock for embed-certs-845058: {Name:mk85b71a4a4bb7a0a4b4cad02670c4d592640bcc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 20:49:32.805183  334582 start.go:364] duration metric: took 112.58µs to acquireMachinesLock for "embed-certs-845058"
	I0918 20:49:32.805218  334582 start.go:93] Provisioning new machine with config: &{Name:embed-certs-845058 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-845058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0918 20:49:32.805307  334582 start.go:125] createHost starting for "" (driver="docker")
	I0918 20:49:32.707913  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:34.709527  321053 pod_ready.go:103] pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace has status "Ready":"False"
	I0918 20:49:32.808449  334582 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0918 20:49:32.808779  334582 start.go:159] libmachine.API.Create for "embed-certs-845058" (driver="docker")
	I0918 20:49:32.808821  334582 client.go:168] LocalClient.Create starting
	I0918 20:49:32.808913  334582 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem
	I0918 20:49:32.808956  334582 main.go:141] libmachine: Decoding PEM data...
	I0918 20:49:32.808977  334582 main.go:141] libmachine: Parsing certificate...
	I0918 20:49:32.809034  334582 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem
	I0918 20:49:32.809056  334582 main.go:141] libmachine: Decoding PEM data...
	I0918 20:49:32.809071  334582 main.go:141] libmachine: Parsing certificate...
	I0918 20:49:32.809461  334582 cli_runner.go:164] Run: docker network inspect embed-certs-845058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 20:49:32.826172  334582 cli_runner.go:211] docker network inspect embed-certs-845058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 20:49:32.826257  334582 network_create.go:284] running [docker network inspect embed-certs-845058] to gather additional debugging logs...
	I0918 20:49:32.826281  334582 cli_runner.go:164] Run: docker network inspect embed-certs-845058
	W0918 20:49:32.842654  334582 cli_runner.go:211] docker network inspect embed-certs-845058 returned with exit code 1
	I0918 20:49:32.842692  334582 network_create.go:287] error running [docker network inspect embed-certs-845058]: docker network inspect embed-certs-845058: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-845058 not found
	I0918 20:49:32.842705  334582 network_create.go:289] output of [docker network inspect embed-certs-845058]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-845058 not found
	
	** /stderr **
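
network_create.go treats exit status 1 with "network ... not found" on stderr as "the network does not exist yet", which is the expected state before creation. A sketch of such a probe (an assumed helper, not minikube's code) using the same kind of --format template:

    // net_inspect.go - hedged sketch of probing for a docker network.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func networkExists(name string) (bool, string, error) {
    	cmd := exec.Command("docker", "network", "inspect", name,
    		"--format", `{{(index .IPAM.Config 0).Subnet}}`)
    	var out, errb bytes.Buffer
    	cmd.Stdout, cmd.Stderr = &out, &errb
    	if err := cmd.Run(); err != nil {
    		if strings.Contains(errb.String(), "not found") {
    			return false, "", nil // the same condition the log hits above
    		}
    		return false, "", fmt.Errorf("inspect %s: %v: %s", name, err, errb.String())
    	}
    	return true, strings.TrimSpace(out.String()), nil
    }

    func main() {
    	ok, subnet, err := networkExists("embed-certs-845058")
    	fmt.Println(ok, subnet, err)
    }
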
	I0918 20:49:32.842816  334582 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 20:49:32.862157  334582 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3c97df0c2a48 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:1c:0c:98:e2} reservation:<nil>}
	I0918 20:49:32.862589  334582 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fd089a803963 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:18:b1:7d:5c} reservation:<nil>}
	I0918 20:49:32.863052  334582 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ae84277cfbb5 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d0:95:02:11} reservation:<nil>}
	I0918 20:49:32.863456  334582 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-52be7fa6c102 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:bf:77:d7:ab} reservation:<nil>}
	I0918 20:49:32.864011  334582 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018fc7e0}
	I0918 20:49:32.864072  334582 network_create.go:124] attempt to create docker network embed-certs-845058 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0918 20:49:32.864174  334582 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-845058 embed-certs-845058
	I0918 20:49:32.945459  334582 network_create.go:108] docker network embed-certs-845058 192.168.85.0/24 created
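
The scan above walks candidate private /24s starting at 192.168.49.0 with the third octet stepping by 9 (49, 58, 67, 76, 85) and takes the first one no existing bridge occupies; the gateway gets .1 and the node container the static IP .2. A compact sketch of the candidate walk (the taken set is hard-coded for illustration; the real code reads it from host interfaces and routes):

    // subnet_scan.go - hedged sketch of the free-subnet scan.
    package main

    import "fmt"

    func freeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 247; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return "" // nothing free in this range
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    	}
    	fmt.Println(freeSubnet(taken)) // prints 192.168.85.0/24
    }
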
	I0918 20:49:32.945493  334582 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-845058" container
	I0918 20:49:32.945574  334582 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 20:49:32.961722  334582 cli_runner.go:164] Run: docker volume create embed-certs-845058 --label name.minikube.sigs.k8s.io=embed-certs-845058 --label created_by.minikube.sigs.k8s.io=true
	I0918 20:49:32.978159  334582 oci.go:103] Successfully created a docker volume embed-certs-845058
	I0918 20:49:32.978260  334582 cli_runner.go:164] Run: docker run --rm --name embed-certs-845058-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-845058 --entrypoint /usr/bin/test -v embed-certs-845058:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0918 20:49:33.628909  334582 oci.go:107] Successfully prepared a docker volume embed-certs-845058
	I0918 20:49:33.628965  334582 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 20:49:33.628985  334582 kic.go:194] Starting extracting preloaded images to volume ...
	I0918 20:49:33.629056  334582 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-845058:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
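
The extraction streams the preloaded lz4 tarball into the named volume by running tar inside a throwaway container from the kic base image, so files land with the image's own ownership. A sketch of building that docker run invocation from Go (paths and names below are placeholders):

    // extract_preload.go - hedged sketch of the tar-in-a-container extraction.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func extractPreload(tarball, volume, baseImage string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro", // tarball mounted read-only
    		"-v", volume+":/extractDir",       // named volume receives the files
    		baseImage,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	return cmd.Run()
    }

    func main() {
    	if err := extractPreload(
    		"/home/jenkins/.minikube/cache/preload.tar.lz4", // placeholder path
    		"embed-certs-845058",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662",
    	); err != nil {
    		log.Fatal(err)
    	}
    }
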
	I0918 20:49:36.705765  321053 pod_ready.go:82] duration metric: took 4m0.005676003s for pod "metrics-server-9975d5f86-jxd9z" in "kube-system" namespace to be "Ready" ...
	E0918 20:49:36.705795  321053 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0918 20:49:36.705806  321053 pod_ready.go:39] duration metric: took 5m25.394785707s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
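
This is where the old-k8s-version cluster's extra wait gives up: pod metrics-server-9975d5f86-jxd9z never reported Ready within the 4m budget, so the wait returns a context deadline error and the run proceeds to log collection. A sketch of such a Ready wait, polling kubectl's jsonpath output under a context timeout (the CLI here stands in for whatever client a real implementation might use):

    // pod_ready_wait.go - hedged sketch of waiting for a pod's Ready condition.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitPodReady(ctx context.Context, ns, pod string) error {
    	for {
    		out, _ := exec.CommandContext(ctx, "kubectl", "-n", ns, "get", "pod", pod,
    			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    		if strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("pod %q never became Ready: %w", pod, ctx.Err())
    		case <-time.After(2 * time.Second): // poll interval is an assumption
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	fmt.Println(waitPodReady(ctx, "kube-system", "metrics-server-9975d5f86-jxd9z"))
    }
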
	I0918 20:49:36.705826  321053 api_server.go:52] waiting for apiserver process to appear ...
	I0918 20:49:36.705912  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 20:49:36.734397  321053 logs.go:276] 2 containers: [93890fa25a1b 42d259a39ced]
	I0918 20:49:36.734548  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 20:49:36.755622  321053 logs.go:276] 2 containers: [468c9a428546 ee3fb21586d8]
	I0918 20:49:36.755748  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 20:49:36.777370  321053 logs.go:276] 2 containers: [c469388f2e86 22f71e25c69d]
	I0918 20:49:36.777465  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 20:49:36.796457  321053 logs.go:276] 2 containers: [0b5c2c549d85 d7d92bf388a8]
	I0918 20:49:36.796561  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 20:49:36.818276  321053 logs.go:276] 2 containers: [576f1c60a0bf 2906bf503bae]
	I0918 20:49:36.818413  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 20:49:36.838974  321053 logs.go:276] 2 containers: [d5f549195fea 04f0f084f259]
	I0918 20:49:36.839110  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 20:49:36.873307  321053 logs.go:276] 0 containers: []
	W0918 20:49:36.873379  321053 logs.go:278] No container was found matching "kindnet"
	I0918 20:49:36.873471  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0918 20:49:36.891877  321053 logs.go:276] 1 containers: [57496d116106]
	I0918 20:49:36.891981  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 20:49:36.910982  321053 logs.go:276] 2 containers: [d98463859c48 ebd705b772fc]
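
Log collection starts by harvesting container IDs per control-plane component with `docker ps -a --filter name=k8s_<component> --format {{.ID}}`; an empty list (as for kindnet above) just means that component isn't deployed on this cluster. A sketch of the harvesting loop:

    // list_k8s_containers.go - hedged sketch of per-component ID harvesting.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerIDs(component string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out)) // one ID per line; empty slice if none
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		fmt.Printf("%s: %v\n", c, containerIDs(c))
    	}
    }
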
	I0918 20:49:36.911014  321053 logs.go:123] Gathering logs for kubelet ...
	I0918 20:49:36.911025  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 20:49:36.969958  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.544419    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.971458  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.821194    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.972124  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:14 old-k8s-version-959748 kubelet[1382]: E0918 20:44:14.888406    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.974834  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:26 old-k8s-version-959748 kubelet[1382]: E0918 20:44:26.455937    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.979570  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:37 old-k8s-version-959748 kubelet[1382]: E0918 20:44:37.301443    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.979778  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:38 old-k8s-version-959748 kubelet[1382]: E0918 20:44:38.321604    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.979963  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:40 old-k8s-version-959748 kubelet[1382]: E0918 20:44:40.410130    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.980740  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:44 old-k8s-version-959748 kubelet[1382]: E0918 20:44:44.392000    1382 pod_workers.go:191] Error syncing pod 773bb59a-fa3d-4310-a265-018dd10517a1 ("storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"
	W0918 20:49:36.982823  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:51 old-k8s-version-959748 kubelet[1382]: E0918 20:44:51.428423    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.985433  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:53 old-k8s-version-959748 kubelet[1382]: E0918 20:44:53.973975    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.985774  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:06 old-k8s-version-959748 kubelet[1382]: E0918 20:45:06.402554    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.985975  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:09 old-k8s-version-959748 kubelet[1382]: E0918 20:45:09.412890    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.986160  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:21 old-k8s-version-959748 kubelet[1382]: E0918 20:45:21.403236    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.988430  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:24 old-k8s-version-959748 kubelet[1382]: E0918 20:45:24.993892    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.990524  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:35 old-k8s-version-959748 kubelet[1382]: E0918 20:45:35.427494    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.990727  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:36 old-k8s-version-959748 kubelet[1382]: E0918 20:45:36.425415    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.990912  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:47 old-k8s-version-959748 kubelet[1382]: E0918 20:45:47.404058    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991112  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:50 old-k8s-version-959748 kubelet[1382]: E0918 20:45:50.420469    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991309  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:01 old-k8s-version-959748 kubelet[1382]: E0918 20:46:01.422276    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991505  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:02 old-k8s-version-959748 kubelet[1382]: E0918 20:46:02.406518    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.991685  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.403352    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.993966  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.981455    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:36.994166  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:26 old-k8s-version-959748 kubelet[1382]: E0918 20:46:26.405462    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994354  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:28 old-k8s-version-959748 kubelet[1382]: E0918 20:46:28.402770    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994551  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:37 old-k8s-version-959748 kubelet[1382]: E0918 20:46:37.433602    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994737  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:41 old-k8s-version-959748 kubelet[1382]: E0918 20:46:41.402626    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.994934  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:48 old-k8s-version-959748 kubelet[1382]: E0918 20:46:48.402441    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.995119  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:55 old-k8s-version-959748 kubelet[1382]: E0918 20:46:55.402485    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.995329  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:01 old-k8s-version-959748 kubelet[1382]: E0918 20:47:01.402464    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.997396  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:10 old-k8s-version-959748 kubelet[1382]: E0918 20:47:10.427537    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:36.997603  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:14 old-k8s-version-959748 kubelet[1382]: E0918 20:47:14.402961    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.997787  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:23 old-k8s-version-959748 kubelet[1382]: E0918 20:47:23.402466    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.997985  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:25 old-k8s-version-959748 kubelet[1382]: E0918 20:47:25.402567    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:36.998171  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:35 old-k8s-version-959748 kubelet[1382]: E0918 20:47:35.404622    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.000420  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:41 old-k8s-version-959748 kubelet[1382]: E0918 20:47:41.114133    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:37.000608  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:50 old-k8s-version-959748 kubelet[1382]: E0918 20:47:50.403588    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.000808  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:54 old-k8s-version-959748 kubelet[1382]: E0918 20:47:54.411522    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.000993  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:01 old-k8s-version-959748 kubelet[1382]: E0918 20:48:01.402535    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001191  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:08 old-k8s-version-959748 kubelet[1382]: E0918 20:48:08.403570    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001375  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:14 old-k8s-version-959748 kubelet[1382]: E0918 20:48:14.405384    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001572  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:21 old-k8s-version-959748 kubelet[1382]: E0918 20:48:21.402696    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001756  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:28 old-k8s-version-959748 kubelet[1382]: E0918 20:48:28.405928    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.001962  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:34 old-k8s-version-959748 kubelet[1382]: E0918 20:48:34.402538    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002151  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:39 old-k8s-version-959748 kubelet[1382]: E0918 20:48:39.402531    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002347  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:46 old-k8s-version-959748 kubelet[1382]: E0918 20:48:46.412489    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002531  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:54 old-k8s-version-959748 kubelet[1382]: E0918 20:48:54.402285    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002728  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:58 old-k8s-version-959748 kubelet[1382]: E0918 20:48:58.405409    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.002915  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003112  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003302  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003507  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:37.003701  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
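
Every "Found kubelet problem" line above comes from scanning the last 400 kubelet journal entries against known error patterns; here they are all ErrImagePull/ImagePullBackOff loops, for metrics-server (pointed at the deliberately fake registry domain fake.domain) and for dashboard-metrics-scraper (a schema-1 image current Docker refuses to pull). A sketch of such a scan, with one illustrative pattern; minikube's logs.go matches a longer list, and the real run invokes journalctl via sudo over SSH:

    // kubelet_problems.go - hedged sketch of scanning kubelet journal lines.
    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    var problem = regexp.MustCompile(`pod_workers\.go:\d+\] Error syncing pod`)

    func main() {
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		fmt.Println("journalctl:", err)
    		return
    	}
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
    	for sc.Scan() {
    		if problem.Match(sc.Bytes()) {
    			fmt.Println("Found kubelet problem:", sc.Text())
    		}
    	}
    }
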
	I0918 20:49:37.003711  321053 logs.go:123] Gathering logs for kube-apiserver [93890fa25a1b] ...
	I0918 20:49:37.003725  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93890fa25a1b"
	I0918 20:49:37.113253  321053 logs.go:123] Gathering logs for kube-apiserver [42d259a39ced] ...
	I0918 20:49:37.113357  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d259a39ced"
	I0918 20:49:37.263513  321053 logs.go:123] Gathering logs for etcd [468c9a428546] ...
	I0918 20:49:37.263551  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c9a428546"
	I0918 20:49:37.292229  321053 logs.go:123] Gathering logs for etcd [ee3fb21586d8] ...
	I0918 20:49:37.292266  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee3fb21586d8"
	I0918 20:49:37.336122  321053 logs.go:123] Gathering logs for kubernetes-dashboard [57496d116106] ...
	I0918 20:49:37.336154  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57496d116106"
	I0918 20:49:37.360987  321053 logs.go:123] Gathering logs for coredns [c469388f2e86] ...
	I0918 20:49:37.361019  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c469388f2e86"
	I0918 20:49:37.387321  321053 logs.go:123] Gathering logs for coredns [22f71e25c69d] ...
	I0918 20:49:37.387360  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f71e25c69d"
	I0918 20:49:37.425588  321053 logs.go:123] Gathering logs for kube-proxy [2906bf503bae] ...
	I0918 20:49:37.425617  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2906bf503bae"
	I0918 20:49:37.463486  321053 logs.go:123] Gathering logs for kube-controller-manager [d5f549195fea] ...
	I0918 20:49:37.463532  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f549195fea"
	I0918 20:49:37.537929  321053 logs.go:123] Gathering logs for storage-provisioner [ebd705b772fc] ...
	I0918 20:49:37.537969  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd705b772fc"
	I0918 20:49:37.575995  321053 logs.go:123] Gathering logs for describe nodes ...
	I0918 20:49:37.576022  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 20:49:37.842401  321053 logs.go:123] Gathering logs for kube-scheduler [0b5c2c549d85] ...
	I0918 20:49:37.842436  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5c2c549d85"
	I0918 20:49:37.893343  321053 logs.go:123] Gathering logs for kube-scheduler [d7d92bf388a8] ...
	I0918 20:49:37.893378  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d92bf388a8"
	I0918 20:49:37.947686  321053 logs.go:123] Gathering logs for kube-proxy [576f1c60a0bf] ...
	I0918 20:49:37.947727  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 576f1c60a0bf"
	I0918 20:49:37.979029  321053 logs.go:123] Gathering logs for container status ...
	I0918 20:49:37.979057  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0918 20:49:38.132771  321053 logs.go:123] Gathering logs for dmesg ...
	I0918 20:49:38.132820  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 20:49:38.180316  321053 logs.go:123] Gathering logs for kube-controller-manager [04f0f084f259] ...
	I0918 20:49:38.180356  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f0f084f259"
	I0918 20:49:38.301788  321053 logs.go:123] Gathering logs for storage-provisioner [d98463859c48] ...
	I0918 20:49:38.301864  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d98463859c48"
	I0918 20:49:38.353844  321053 logs.go:123] Gathering logs for Docker ...
	I0918 20:49:38.353872  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 20:49:38.418488  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:38.418527  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 20:49:38.419252  321053 out.go:270] X Problems detected in kubelet:
	W0918 20:49:38.419270  321053 out.go:270]   Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419277  321053 out.go:270]   Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419297  321053 out.go:270]   Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419304  321053 out.go:270]   Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:38.419463  321053 out.go:270]   Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 20:49:38.419500  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:38.419514  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:37.575148  334582 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-845058:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.946054545s)
	I0918 20:49:37.575178  334582 kic.go:203] duration metric: took 3.946189697s to extract preloaded images to volume ...
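Note: the completed step above unpacks the preloaded image tarball straight into the cluster's named Docker volume by running tar inside a throwaway kicbase container, so nothing is staged on the node's filesystem first. A minimal sketch of the same pattern, assuming PRELOAD_TAR is a placeholder for the host-side .tar.lz4 path from the log:

	# Sketch: extract a preload archive directly into a Docker volume.
	# PRELOAD_TAR is a placeholder for the host path shown in the log.
	PRELOAD_TAR=/path/to/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TAR:/preloaded.tar:ro" \
	  -v embed-certs-845058:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662 \
	  -I lz4 -xf /preloaded.tar -C /extractDir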
	W0918 20:49:37.575372  334582 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 20:49:37.575486  334582 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 20:49:37.651291  334582 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-845058 --name embed-certs-845058 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-845058 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-845058 --network embed-certs-845058 --ip 192.168.85.2 --volume embed-certs-845058:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
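Note: the docker run above starts the node container --privileged, with tmpfs on /tmp and /run and the host's /lib/modules mounted read-only, which is what lets systemd come up as PID 1 inside the kic node; each --publish=127.0.0.1:: asks Docker to pick an ephemeral host port (33091 for 22/tcp later in this run). A quick way to confirm the result, as a sketch:

	# Sketch: check that systemd is PID 1 in the node container and see
	# which host port Docker assigned to SSH (names from the log).
	docker exec embed-certs-845058 readlink /proc/1/exe
	docker port embed-certs-845058 22/tcp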
	I0918 20:49:38.040179  334582 cli_runner.go:164] Run: docker container inspect embed-certs-845058 --format={{.State.Running}}
	I0918 20:49:38.066294  334582 cli_runner.go:164] Run: docker container inspect embed-certs-845058 --format={{.State.Status}}
	I0918 20:49:38.110446  334582 cli_runner.go:164] Run: docker exec embed-certs-845058 stat /var/lib/dpkg/alternatives/iptables
	I0918 20:49:38.224526  334582 oci.go:144] the created container "embed-certs-845058" has a running status.
	I0918 20:49:38.224554  334582 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa...
	I0918 20:49:39.203728  334582 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 20:49:39.227493  334582 cli_runner.go:164] Run: docker container inspect embed-certs-845058 --format={{.State.Status}}
	I0918 20:49:39.249825  334582 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 20:49:39.249844  334582 kic_runner.go:114] Args: [docker exec --privileged embed-certs-845058 chown docker:docker /home/docker/.ssh/authorized_keys]
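Note: SSH access to the node is bootstrapped by generating a keypair on the host and pushing the public half into /home/docker/.ssh/authorized_keys inside the container, then fixing its ownership. The same two steps with plain docker commands, as a sketch using the paths from the log:

	# Sketch: provision the node's authorized_keys by hand.
	docker cp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa.pub \
	  embed-certs-845058:/home/docker/.ssh/authorized_keys
	docker exec --privileged embed-certs-845058 \
	  chown docker:docker /home/docker/.ssh/authorized_keys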
	I0918 20:49:39.316961  334582 cli_runner.go:164] Run: docker container inspect embed-certs-845058 --format={{.State.Status}}
	I0918 20:49:39.346726  334582 machine.go:93] provisionDockerMachine start ...
	I0918 20:49:39.346905  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:39.369154  334582 main.go:141] libmachine: Using SSH client type: native
	I0918 20:49:39.369491  334582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I0918 20:49:39.369510  334582 main.go:141] libmachine: About to run SSH command:
	hostname
	I0918 20:49:39.526747  334582 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845058
	
	I0918 20:49:39.526771  334582 ubuntu.go:169] provisioning hostname "embed-certs-845058"
	I0918 20:49:39.526837  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:39.544701  334582 main.go:141] libmachine: Using SSH client type: native
	I0918 20:49:39.544950  334582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I0918 20:49:39.544969  334582 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-845058 && echo "embed-certs-845058" | sudo tee /etc/hostname
	I0918 20:49:39.704903  334582 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845058
	
	I0918 20:49:39.704989  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:39.724407  334582 main.go:141] libmachine: Using SSH client type: native
	I0918 20:49:39.724662  334582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I0918 20:49:39.724705  334582 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-845058' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-845058/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-845058' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 20:49:39.875692  334582 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 20:49:39.875725  334582 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19667-2236/.minikube CaCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19667-2236/.minikube}
	I0918 20:49:39.875767  334582 ubuntu.go:177] setting up certificates
	I0918 20:49:39.875779  334582 provision.go:84] configureAuth start
	I0918 20:49:39.875849  334582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-845058
	I0918 20:49:39.893502  334582 provision.go:143] copyHostCerts
	I0918 20:49:39.893575  334582 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem, removing ...
	I0918 20:49:39.893588  334582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem
	I0918 20:49:39.893670  334582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/ca.pem (1078 bytes)
	I0918 20:49:39.893767  334582 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem, removing ...
	I0918 20:49:39.893777  334582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem
	I0918 20:49:39.893806  334582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/cert.pem (1123 bytes)
	I0918 20:49:39.893867  334582 exec_runner.go:144] found /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem, removing ...
	I0918 20:49:39.893876  334582 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem
	I0918 20:49:39.893901  334582 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19667-2236/.minikube/key.pem (1675 bytes)
	I0918 20:49:39.893949  334582 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem org=jenkins.embed-certs-845058 san=[127.0.0.1 192.168.85.2 embed-certs-845058 localhost minikube]
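Note: configureAuth signs a server certificate against the shared minikube CA with exactly the SANs listed above (loopback, the node IP, the hostname, localhost, minikube). minikube does this in Go; an equivalent done by hand with openssl would look roughly like this sketch (filenames assumed):

	# Sketch: issue a server cert with the same SAN set using openssl.
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-845058" |
	openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:embed-certs-845058,DNS:localhost,DNS:minikube')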
	I0918 20:49:40.346122  334582 provision.go:177] copyRemoteCerts
	I0918 20:49:40.346195  334582 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 20:49:40.346245  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:40.363733  334582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa Username:docker}
	I0918 20:49:40.466526  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0918 20:49:40.496180  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 20:49:40.523423  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 20:49:40.551061  334582 provision.go:87] duration metric: took 675.261376ms to configureAuth
	I0918 20:49:40.551101  334582 ubuntu.go:193] setting minikube options for container-runtime
	I0918 20:49:40.551421  334582 config.go:182] Loaded profile config "embed-certs-845058": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:49:40.551490  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:40.569140  334582 main.go:141] libmachine: Using SSH client type: native
	I0918 20:49:40.569412  334582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I0918 20:49:40.569429  334582 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0918 20:49:40.715844  334582 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0918 20:49:40.715868  334582 ubuntu.go:71] root file system type: overlay
	I0918 20:49:40.715989  334582 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0918 20:49:40.716065  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:40.733271  334582 main.go:141] libmachine: Using SSH client type: native
	I0918 20:49:40.733521  334582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I0918 20:49:40.733604  334582 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0918 20:49:40.897877  334582 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0918 20:49:40.897967  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:40.918278  334582 main.go:141] libmachine: Using SSH client type: native
	I0918 20:49:40.918631  334582 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I0918 20:49:40.918658  334582 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0918 20:49:41.788823  334582 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-18 20:49:40.892738738 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
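Note: the unit update that produced the diff above is idempotent: docker.service.new only replaces the installed unit (followed by daemon-reload, enable, restart) when diff exits non-zero, so a re-run against an already converged file changes nothing. The general shape of the pattern, as a sketch in which render_unit stands in for the long printf:

	# Sketch: replace-and-restart only when the rendered unit differs.
	unit=/lib/systemd/system/docker.service
	render_unit | sudo tee "$unit.new" >/dev/null   # render_unit is hypothetical
	if ! sudo diff -u "$unit" "$unit.new"; then
	  sudo mv "$unit.new" "$unit"
	  sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	fi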
	
	I0918 20:49:41.789013  334582 machine.go:96] duration metric: took 2.442253266s to provisionDockerMachine
	I0918 20:49:41.789092  334582 client.go:171] duration metric: took 8.980256495s to LocalClient.Create
	I0918 20:49:41.789143  334582 start.go:167] duration metric: took 8.980359189s to libmachine.API.Create "embed-certs-845058"
	I0918 20:49:41.789161  334582 start.go:293] postStartSetup for "embed-certs-845058" (driver="docker")
	I0918 20:49:41.789192  334582 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 20:49:41.789319  334582 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 20:49:41.789396  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:41.828883  334582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa Username:docker}
	I0918 20:49:41.963830  334582 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 20:49:41.969496  334582 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 20:49:41.969543  334582 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 20:49:41.969561  334582 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 20:49:41.969569  334582 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0918 20:49:41.969580  334582 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/addons for local assets ...
	I0918 20:49:41.969647  334582 filesync.go:126] Scanning /home/jenkins/minikube-integration/19667-2236/.minikube/files for local assets ...
	I0918 20:49:41.969749  334582 filesync.go:149] local asset: /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem -> 75652.pem in /etc/ssl/certs
	I0918 20:49:41.969881  334582 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 20:49:41.980828  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem --> /etc/ssl/certs/75652.pem (1708 bytes)
	I0918 20:49:42.010174  334582 start.go:296] duration metric: took 220.993733ms for postStartSetup
	I0918 20:49:42.010613  334582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-845058
	I0918 20:49:42.043103  334582 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/config.json ...
	I0918 20:49:42.043516  334582 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:49:42.043571  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:42.068269  334582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa Username:docker}
	I0918 20:49:42.169406  334582 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 20:49:42.175072  334582 start.go:128] duration metric: took 9.369744455s to createHost
	I0918 20:49:42.175111  334582 start.go:83] releasing machines lock for "embed-certs-845058", held for 9.369911623s
	I0918 20:49:42.175228  334582 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-845058
	I0918 20:49:42.195989  334582 ssh_runner.go:195] Run: cat /version.json
	I0918 20:49:42.196017  334582 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 20:49:42.196051  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:42.196124  334582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845058
	I0918 20:49:42.224001  334582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa Username:docker}
	I0918 20:49:42.232979  334582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/embed-certs-845058/id_rsa Username:docker}
	I0918 20:49:42.495186  334582 ssh_runner.go:195] Run: systemctl --version
	I0918 20:49:42.500036  334582 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 20:49:42.504927  334582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0918 20:49:42.532895  334582 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0918 20:49:42.533024  334582 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 20:49:42.562971  334582 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
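Note: competing bridge and podman CNI configs are disabled by renaming them with a .mk_disabled suffix rather than deleting them, so the change is reversible; the disabled files are listed in the log line above. The rename loop, simplified into a sketch:

	# Sketch: reversibly disable bridge/podman CNI configs by renaming.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;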
	I0918 20:49:42.563049  334582 start.go:495] detecting cgroup driver to use...
	I0918 20:49:42.563103  334582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 20:49:42.563222  334582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:49:42.581624  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0918 20:49:42.593399  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0918 20:49:42.605481  334582 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0918 20:49:42.605605  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0918 20:49:42.617336  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 20:49:42.627812  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0918 20:49:42.639423  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0918 20:49:42.650524  334582 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 20:49:42.660449  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0918 20:49:42.670749  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0918 20:49:42.681530  334582 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0918 20:49:42.692623  334582 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 20:49:42.702702  334582 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
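Note: the two kernel settings touched above are the ones kubeadm's preflight checks for pod networking: net.bridge.bridge-nf-call-iptables is read, and net.ipv4.ip_forward is set to 1, but only for the running kernel. A persistent equivalent, as a sketch:

	# Sketch: make both settings survive a reboot via sysctl.d.
	printf 'net.ipv4.ip_forward = 1\nnet.bridge.bridge-nf-call-iptables = 1\n' |
	  sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system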
	I0918 20:49:42.713474  334582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:49:42.812230  334582 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0918 20:49:42.905165  334582 start.go:495] detecting cgroup driver to use...
	I0918 20:49:42.905214  334582 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0918 20:49:42.905266  334582 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0918 20:49:42.921673  334582 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0918 20:49:42.921756  334582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0918 20:49:42.939220  334582 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 20:49:42.959993  334582 ssh_runner.go:195] Run: which cri-dockerd
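Note: /etc/crictl.yaml was rewritten just above to point at cri-dockerd instead of containerd, because the Docker runtime was kept (the containerd shutdown was skipped as "bound to it"). From here on, crictl reaches Docker through the CRI shim; a usage sketch:

	# Sketch: talk to the Docker runtime via cri-dockerd's CRI socket.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a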
	I0918 20:49:42.963852  334582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0918 20:49:42.975087  334582 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0918 20:49:42.997666  334582 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0918 20:49:43.112574  334582 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0918 20:49:43.228052  334582 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0918 20:49:43.228199  334582 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0918 20:49:43.247318  334582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:49:43.355108  334582 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0918 20:49:43.687006  334582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0918 20:49:43.700933  334582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 20:49:43.714165  334582 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0918 20:49:43.813481  334582 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0918 20:49:43.943652  334582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:49:44.044737  334582 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0918 20:49:44.062882  334582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0918 20:49:44.077362  334582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:49:44.173453  334582 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0918 20:49:44.254364  334582 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0918 20:49:44.254455  334582 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0918 20:49:44.258587  334582 start.go:563] Will wait 60s for crictl version
	I0918 20:49:44.258687  334582 ssh_runner.go:195] Run: which crictl
	I0918 20:49:44.262759  334582 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 20:49:44.300937  334582 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0918 20:49:44.301041  334582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 20:49:44.325955  334582 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0918 20:49:44.357458  334582 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0918 20:49:44.357597  334582 cli_runner.go:164] Run: docker network inspect embed-certs-845058 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 20:49:44.376267  334582 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0918 20:49:44.380280  334582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
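Note: the /etc/hosts update above rebuilds the file in /tmp and then cp's it over the original instead of editing in place; inside a container /etc/hosts is a bind mount, so it can be overwritten but not replaced by rename (which is what sed -i or mv would attempt). The pattern spelled out as a sketch:

	# Sketch: refresh a hosts entry without duplicates; cp, not mv,
	# because /etc/hosts is bind-mounted in the container.
	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts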
	I0918 20:49:44.393037  334582 kubeadm.go:883] updating cluster {Name:embed-certs-845058 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-845058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0918 20:49:44.393170  334582 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 20:49:44.393239  334582 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 20:49:44.434026  334582 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 20:49:44.434063  334582 docker.go:615] Images already preloaded, skipping extraction
	I0918 20:49:44.434126  334582 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0918 20:49:44.454818  334582 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0918 20:49:44.454844  334582 cache_images.go:84] Images are preloaded, skipping loading
	I0918 20:49:44.454854  334582 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 docker true true} ...
	I0918 20:49:44.454950  334582 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-845058 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-845058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
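Note: the kubelet unit above uses the same drop-in trick as the docker unit earlier: the bare ExecStart= first clears the command inherited from the base unit so that systemd sees exactly one effective ExecStart. A sketch for confirming the merged result on the node:

	# Sketch: inspect the merged kubelet unit after the drop-in lands.
	sudo systemctl daemon-reload
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart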
	I0918 20:49:44.455023  334582 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0918 20:49:44.519308  334582 cni.go:84] Creating CNI manager for ""
	I0918 20:49:44.519337  334582 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 20:49:44.519347  334582 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0918 20:49:44.519367  334582 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-845058 NodeName:embed-certs-845058 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 20:49:44.519514  334582 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-845058"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
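Note: the generated config above is four kubeadm documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is scp'd to /var/tmp/minikube/kubeadm.yaml.new below and copied into place before init. Assuming kubeadm v1.31's config subcommands, it could be sanity-checked first, as a sketch:

	# Sketch: validate the rendered config before "kubeadm init".
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml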
	
	I0918 20:49:44.519590  334582 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0918 20:49:44.529547  334582 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 20:49:44.529622  334582 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 20:49:44.540922  334582 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0918 20:49:44.564820  334582 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 20:49:44.587283  334582 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0918 20:49:44.607006  334582 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0918 20:49:44.610573  334582 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 20:49:44.621689  334582 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 20:49:44.715311  334582 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0918 20:49:44.731767  334582 certs.go:68] Setting up /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058 for IP: 192.168.85.2
	I0918 20:49:44.731805  334582 certs.go:194] generating shared ca certs ...
	I0918 20:49:44.731821  334582 certs.go:226] acquiring lock for ca certs: {Name:mk958e02b356056556309ee300f2f34fdfb18284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:44.732039  334582 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key
	I0918 20:49:44.732129  334582 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key
	I0918 20:49:44.732144  334582 certs.go:256] generating profile certs ...
	I0918 20:49:44.732217  334582 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/client.key
	I0918 20:49:44.732251  334582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/client.crt with IP's: []
	I0918 20:49:45.629091  334582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/client.crt ...
	I0918 20:49:45.629123  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/client.crt: {Name:mkcbc74b90dfa1bb5b76011ff48aca240dc5b200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:45.629321  334582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/client.key ...
	I0918 20:49:45.629334  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/client.key: {Name:mk4f312a368e564507640396d203960a3cf81b92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:45.629885  334582 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.key.50540dbf
	I0918 20:49:45.629946  334582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.crt.50540dbf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0918 20:49:46.188870  334582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.crt.50540dbf ...
	I0918 20:49:46.188909  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.crt.50540dbf: {Name:mkd52b76ede4eb95be609b89883c3c28f670780a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:46.189594  334582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.key.50540dbf ...
	I0918 20:49:46.189613  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.key.50540dbf: {Name:mkdc630b578035aae096332c0a78c2ae5bfb3ab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:46.190138  334582 certs.go:381] copying /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.crt.50540dbf -> /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.crt
	I0918 20:49:46.190232  334582 certs.go:385] copying /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.key.50540dbf -> /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.key
	I0918 20:49:46.190294  334582 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.key
	I0918 20:49:46.190314  334582 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.crt with IP's: []
	I0918 20:49:47.067007  334582 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.crt ...
	I0918 20:49:47.067037  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.crt: {Name:mk7d1273c6202faa590851d46d61977105c2b3a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:47.067683  334582 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.key ...
	I0918 20:49:47.067702  334582 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.key: {Name:mk69441fc00691428a4770d4e14837dfc49b7c63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 20:49:47.067896  334582 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/7565.pem (1338 bytes)
	W0918 20:49:47.067941  334582 certs.go:480] ignoring /home/jenkins/minikube-integration/19667-2236/.minikube/certs/7565_empty.pem, impossibly tiny 0 bytes
	I0918 20:49:47.067953  334582 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca-key.pem (1679 bytes)
	I0918 20:49:47.067978  334582 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/ca.pem (1078 bytes)
	I0918 20:49:47.068009  334582 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/cert.pem (1123 bytes)
	I0918 20:49:47.068035  334582 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/certs/key.pem (1675 bytes)
	I0918 20:49:47.068079  334582 certs.go:484] found cert: /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem (1708 bytes)
	I0918 20:49:47.068680  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 20:49:47.096879  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0918 20:49:47.133991  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 20:49:47.162503  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 20:49:47.192135  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0918 20:49:47.220286  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 20:49:47.246068  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 20:49:47.274801  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/embed-certs-845058/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 20:49:47.301895  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/certs/7565.pem --> /usr/share/ca-certificates/7565.pem (1338 bytes)
	I0918 20:49:47.330237  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/ssl/certs/75652.pem --> /usr/share/ca-certificates/75652.pem (1708 bytes)
	I0918 20:49:47.356757  334582 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19667-2236/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 20:49:47.383179  334582 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 20:49:47.403802  334582 ssh_runner.go:195] Run: openssl version
	I0918 20:49:47.410556  334582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7565.pem && ln -fs /usr/share/ca-certificates/7565.pem /etc/ssl/certs/7565.pem"
	I0918 20:49:47.421408  334582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7565.pem
	I0918 20:49:47.425419  334582 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:53 /usr/share/ca-certificates/7565.pem
	I0918 20:49:47.425528  334582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7565.pem
	I0918 20:49:47.433103  334582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7565.pem /etc/ssl/certs/51391683.0"
	I0918 20:49:47.443704  334582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75652.pem && ln -fs /usr/share/ca-certificates/75652.pem /etc/ssl/certs/75652.pem"
	I0918 20:49:47.454067  334582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75652.pem
	I0918 20:49:47.458005  334582 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:53 /usr/share/ca-certificates/75652.pem
	I0918 20:49:47.458120  334582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75652.pem
	I0918 20:49:47.465439  334582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75652.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 20:49:47.475544  334582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 20:49:47.486515  334582 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:49:47.490564  334582 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 18 19:38 /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:49:47.490633  334582 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 20:49:47.499049  334582 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
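Note: the ln -fs steps above implement OpenSSL's hashed-directory lookup: verification finds a CA in /etc/ssl/certs only under a <subject-hash>.0 filename (b5213941.0 for minikubeCA in this run), which is why each PEM is hashed first and then symlinked. The two-step recipe in general, as a sketch:

	# Sketch: install a CA into OpenSSL's hashed trust directory.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"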
	I0918 20:49:47.508990  334582 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0918 20:49:47.512915  334582 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0918 20:49:47.512967  334582 kubeadm.go:392] StartCluster: {Name:embed-certs-845058 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-845058 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 20:49:47.513108  334582 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0918 20:49:47.530806  334582 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 20:49:47.540086  334582 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 20:49:47.549551  334582 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0918 20:49:47.549667  334582 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 20:49:47.560360  334582 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 20:49:47.560433  334582 kubeadm.go:157] found existing configuration files:
	
	I0918 20:49:47.560529  334582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0918 20:49:47.571101  334582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0918 20:49:47.571237  334582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0918 20:49:47.581829  334582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0918 20:49:47.592166  334582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0918 20:49:47.592262  334582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0918 20:49:47.602476  334582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0918 20:49:47.611746  334582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0918 20:49:47.611840  334582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0918 20:49:47.620982  334582 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0918 20:49:47.631261  334582 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0918 20:49:47.631359  334582 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
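The four grep/rm pairs above are a single stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so that the upcoming kubeadm init regenerates it. A sketch of that loop, run locally for illustration (the real code issues the same commands through ssh_runner; grep's non-zero exit covers both "endpoint absent" and "file missing"):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command through bash, standing in for minikube's ssh_runner.
    func run(cmd string) error {
        return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            // grep exits non-zero when the endpoint is absent or the file does not
            // exist; either way the config is stale and removed before kubeadm init.
            if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
                _ = run("sudo rm -f " + path)
            }
        }
    }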
	I0918 20:49:47.640977  334582 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 20:49:47.688181  334582 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0918 20:49:47.688246  334582 kubeadm.go:310] [preflight] Running pre-flight checks
	I0918 20:49:47.712532  334582 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0918 20:49:47.712609  334582 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0918 20:49:47.712650  334582 kubeadm.go:310] OS: Linux
	I0918 20:49:47.712699  334582 kubeadm.go:310] CGROUPS_CPU: enabled
	I0918 20:49:47.712751  334582 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0918 20:49:47.712801  334582 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0918 20:49:47.712852  334582 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0918 20:49:47.712904  334582 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0918 20:49:47.712955  334582 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0918 20:49:47.713004  334582 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0918 20:49:47.713054  334582 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0918 20:49:47.713104  334582 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0918 20:49:47.778667  334582 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 20:49:47.778781  334582 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 20:49:47.778876  334582 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 20:49:47.798479  334582 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 20:49:48.421610  321053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:49:48.437718  321053 api_server.go:72] duration metric: took 5m49.267006218s to wait for apiserver process to appear ...
	I0918 20:49:48.437743  321053 api_server.go:88] waiting for apiserver healthz status ...
	I0918 20:49:48.437839  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0918 20:49:48.461088  321053 logs.go:276] 2 containers: [93890fa25a1b 42d259a39ced]
	I0918 20:49:48.461168  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0918 20:49:48.500728  321053 logs.go:276] 2 containers: [468c9a428546 ee3fb21586d8]
	I0918 20:49:48.500810  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0918 20:49:48.543477  321053 logs.go:276] 2 containers: [c469388f2e86 22f71e25c69d]
	I0918 20:49:48.543560  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0918 20:49:48.571191  321053 logs.go:276] 2 containers: [0b5c2c549d85 d7d92bf388a8]
	I0918 20:49:48.571293  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0918 20:49:48.594937  321053 logs.go:276] 2 containers: [576f1c60a0bf 2906bf503bae]
	I0918 20:49:48.595034  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0918 20:49:48.624755  321053 logs.go:276] 2 containers: [d5f549195fea 04f0f084f259]
	I0918 20:49:48.624839  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0918 20:49:48.659558  321053 logs.go:276] 0 containers: []
	W0918 20:49:48.659580  321053 logs.go:278] No container was found matching "kindnet"
	I0918 20:49:48.659650  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0918 20:49:48.690180  321053 logs.go:276] 1 containers: [57496d116106]
	I0918 20:49:48.690328  321053 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0918 20:49:48.725188  321053 logs.go:276] 2 containers: [d98463859c48 ebd705b772fc]
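The burst of docker ps commands above enumerates containers per control-plane component by filtering on the k8s_<name> prefix that kubelet's dockershim gives Docker containers; the two IDs per component here reflect the restart (one current container, one from the previous run). A sketch of a single such query, assuming the docker CLI on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs (running or exited) whose name
    // matches the k8s_<component> prefix, mirroring the queries in the log.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }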
	I0918 20:49:48.725268  321053 logs.go:123] Gathering logs for kube-scheduler [0b5c2c549d85] ...
	I0918 20:49:48.725294  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5c2c549d85"
	I0918 20:49:48.778547  321053 logs.go:123] Gathering logs for kube-proxy [576f1c60a0bf] ...
	I0918 20:49:48.778626  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 576f1c60a0bf"
	I0918 20:49:48.808196  321053 logs.go:123] Gathering logs for kube-controller-manager [04f0f084f259] ...
	I0918 20:49:48.808277  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04f0f084f259"
	I0918 20:49:48.871563  321053 logs.go:123] Gathering logs for etcd [468c9a428546] ...
	I0918 20:49:48.871642  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c9a428546"
	I0918 20:49:48.904695  321053 logs.go:123] Gathering logs for etcd [ee3fb21586d8] ...
	I0918 20:49:48.904769  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee3fb21586d8"
	I0918 20:49:48.940711  321053 logs.go:123] Gathering logs for Docker ...
	I0918 20:49:48.940787  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0918 20:49:48.968858  321053 logs.go:123] Gathering logs for kube-proxy [2906bf503bae] ...
	I0918 20:49:48.968935  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2906bf503bae"
	I0918 20:49:48.991645  321053 logs.go:123] Gathering logs for storage-provisioner [ebd705b772fc] ...
	I0918 20:49:48.991670  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebd705b772fc"
	I0918 20:49:49.013606  321053 logs.go:123] Gathering logs for describe nodes ...
	I0918 20:49:49.013631  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0918 20:49:49.190708  321053 logs.go:123] Gathering logs for kube-apiserver [42d259a39ced] ...
	I0918 20:49:49.190799  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 42d259a39ced"
	I0918 20:49:49.263333  321053 logs.go:123] Gathering logs for coredns [c469388f2e86] ...
	I0918 20:49:49.263420  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c469388f2e86"
	I0918 20:49:49.292225  321053 logs.go:123] Gathering logs for kube-controller-manager [d5f549195fea] ...
	I0918 20:49:49.292252  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5f549195fea"
	I0918 20:49:49.339748  321053 logs.go:123] Gathering logs for kubelet ...
	I0918 20:49:49.339818  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0918 20:49:49.419853  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.544419    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.421347  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:13 old-k8s-version-959748 kubelet[1382]: E0918 20:44:13.821194    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.422021  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:14 old-k8s-version-959748 kubelet[1382]: E0918 20:44:14.888406    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.424781  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:26 old-k8s-version-959748 kubelet[1382]: E0918 20:44:26.455937    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.432054  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:37 old-k8s-version-959748 kubelet[1382]: E0918 20:44:37.301443    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.432317  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:38 old-k8s-version-959748 kubelet[1382]: E0918 20:44:38.321604    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.432524  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:40 old-k8s-version-959748 kubelet[1382]: E0918 20:44:40.410130    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.433318  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:44 old-k8s-version-959748 kubelet[1382]: E0918 20:44:44.392000    1382 pod_workers.go:191] Error syncing pod 773bb59a-fa3d-4310-a265-018dd10517a1 ("storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(773bb59a-fa3d-4310-a265-018dd10517a1)"
	W0918 20:49:49.437488  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:51 old-k8s-version-959748 kubelet[1382]: E0918 20:44:51.428423    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.441416  321053 logs.go:138] Found kubelet problem: Sep 18 20:44:53 old-k8s-version-959748 kubelet[1382]: E0918 20:44:53.973975    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.441777  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:06 old-k8s-version-959748 kubelet[1382]: E0918 20:45:06.402554    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.441999  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:09 old-k8s-version-959748 kubelet[1382]: E0918 20:45:09.412890    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.442203  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:21 old-k8s-version-959748 kubelet[1382]: E0918 20:45:21.403236    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.444464  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:24 old-k8s-version-959748 kubelet[1382]: E0918 20:45:24.993892    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.446545  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:35 old-k8s-version-959748 kubelet[1382]: E0918 20:45:35.427494    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.446765  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:36 old-k8s-version-959748 kubelet[1382]: E0918 20:45:36.425415    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.446979  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:47 old-k8s-version-959748 kubelet[1382]: E0918 20:45:47.404058    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447198  321053 logs.go:138] Found kubelet problem: Sep 18 20:45:50 old-k8s-version-959748 kubelet[1382]: E0918 20:45:50.420469    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447418  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:01 old-k8s-version-959748 kubelet[1382]: E0918 20:46:01.422276    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447639  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:02 old-k8s-version-959748 kubelet[1382]: E0918 20:46:02.406518    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.447845  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.403352    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.452525  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:14 old-k8s-version-959748 kubelet[1382]: E0918 20:46:14.981455    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.452757  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:26 old-k8s-version-959748 kubelet[1382]: E0918 20:46:26.405462    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.452965  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:28 old-k8s-version-959748 kubelet[1382]: E0918 20:46:28.402770    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453193  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:37 old-k8s-version-959748 kubelet[1382]: E0918 20:46:37.433602    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453398  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:41 old-k8s-version-959748 kubelet[1382]: E0918 20:46:41.402626    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453615  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:48 old-k8s-version-959748 kubelet[1382]: E0918 20:46:48.402441    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.453823  321053 logs.go:138] Found kubelet problem: Sep 18 20:46:55 old-k8s-version-959748 kubelet[1382]: E0918 20:46:55.402485    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.454038  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:01 old-k8s-version-959748 kubelet[1382]: E0918 20:47:01.402464    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.456137  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:10 old-k8s-version-959748 kubelet[1382]: E0918 20:47:10.427537    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0918 20:49:49.456363  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:14 old-k8s-version-959748 kubelet[1382]: E0918 20:47:14.402961    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.456573  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:23 old-k8s-version-959748 kubelet[1382]: E0918 20:47:23.402466    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.456807  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:25 old-k8s-version-959748 kubelet[1382]: E0918 20:47:25.402567    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.457010  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:35 old-k8s-version-959748 kubelet[1382]: E0918 20:47:35.404622    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.459270  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:41 old-k8s-version-959748 kubelet[1382]: E0918 20:47:41.114133    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0918 20:49:49.459485  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:50 old-k8s-version-959748 kubelet[1382]: E0918 20:47:50.403588    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.459700  321053 logs.go:138] Found kubelet problem: Sep 18 20:47:54 old-k8s-version-959748 kubelet[1382]: E0918 20:47:54.411522    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.463469  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:01 old-k8s-version-959748 kubelet[1382]: E0918 20:48:01.402535    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.463695  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:08 old-k8s-version-959748 kubelet[1382]: E0918 20:48:08.403570    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.463899  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:14 old-k8s-version-959748 kubelet[1382]: E0918 20:48:14.405384    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464110  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:21 old-k8s-version-959748 kubelet[1382]: E0918 20:48:21.402696    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464318  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:28 old-k8s-version-959748 kubelet[1382]: E0918 20:48:28.405928    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464536  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:34 old-k8s-version-959748 kubelet[1382]: E0918 20:48:34.402538    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464746  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:39 old-k8s-version-959748 kubelet[1382]: E0918 20:48:39.402531    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.464978  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:46 old-k8s-version-959748 kubelet[1382]: E0918 20:48:46.412489    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465164  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:54 old-k8s-version-959748 kubelet[1382]: E0918 20:48:54.402285    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465357  321053 logs.go:138] Found kubelet problem: Sep 18 20:48:58 old-k8s-version-959748 kubelet[1382]: E0918 20:48:58.405409    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465537  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465734  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.465916  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466108  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466289  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466481  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:37 old-k8s-version-959748 kubelet[1382]: E0918 20:49:37.402868    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.466665  321053 logs.go:138] Found kubelet problem: Sep 18 20:49:45 old-k8s-version-959748 kubelet[1382]: E0918 20:49:45.406022    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 20:49:49.466672  321053 logs.go:123] Gathering logs for dmesg ...
	I0918 20:49:49.466688  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0918 20:49:49.507155  321053 logs.go:123] Gathering logs for kube-scheduler [d7d92bf388a8] ...
	I0918 20:49:49.507181  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7d92bf388a8"
	I0918 20:49:49.532579  321053 logs.go:123] Gathering logs for kubernetes-dashboard [57496d116106] ...
	I0918 20:49:49.532650  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 57496d116106"
	I0918 20:49:49.565575  321053 logs.go:123] Gathering logs for storage-provisioner [d98463859c48] ...
	I0918 20:49:49.565644  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d98463859c48"
	I0918 20:49:49.603377  321053 logs.go:123] Gathering logs for container status ...
	I0918 20:49:49.603453  321053 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
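The container-status command above is a two-level fallback: which crictl || echo crictl keeps the command word non-empty even when crictl is not installed, so the first pipeline fails cleanly (rather than tripping a bash syntax error) and the outer || sudo docker ps -a takes over. Wrapped locally for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same one-liner as the log: try crictl first, fall back to docker.
        cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }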
	I0918 20:49:49.682568  321053 logs.go:123] Gathering logs for kube-apiserver [93890fa25a1b] ...
	I0918 20:49:49.682646  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93890fa25a1b"
	I0918 20:49:49.758539  321053 logs.go:123] Gathering logs for coredns [22f71e25c69d] ...
	I0918 20:49:49.758609  321053 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22f71e25c69d"
	I0918 20:49:49.793203  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:49.793261  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0918 20:49:49.793327  321053 out.go:270] X Problems detected in kubelet:
	W0918 20:49:49.793381  321053 out.go:270]   Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793455  321053 out.go:270]   Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793501  321053 out.go:270]   Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793542  321053 out.go:270]   Sep 18 20:49:37 old-k8s-version-959748 kubelet[1382]: E0918 20:49:37.402868    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0918 20:49:49.793575  321053 out.go:270]   Sep 18 20:49:45 old-k8s-version-959748 kubelet[1382]: E0918 20:49:45.406022    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0918 20:49:49.793616  321053 out.go:358] Setting ErrFile to fd 2...
	I0918 20:49:49.793638  321053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:49:47.804130  334582 out.go:235]   - Generating certificates and keys ...
	I0918 20:49:47.804336  334582 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0918 20:49:47.804457  334582 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0918 20:49:49.700434  334582 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 20:49:50.042175  334582 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0918 20:49:51.016606  334582 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0918 20:49:51.453776  334582 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0918 20:49:52.130475  334582 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0918 20:49:52.130839  334582 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-845058 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0918 20:49:52.425740  334582 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0918 20:49:52.426048  334582 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-845058 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0918 20:49:52.744891  334582 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 20:49:53.111586  334582 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 20:49:53.352685  334582 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0918 20:49:53.352979  334582 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 20:49:53.626674  334582 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 20:49:54.328314  334582 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0918 20:49:55.454591  334582 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 20:49:55.695699  334582 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 20:49:56.335688  334582 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 20:49:56.336495  334582 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 20:49:56.339660  334582 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 20:49:56.343356  334582 out.go:235]   - Booting up control plane ...
	I0918 20:49:56.343470  334582 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 20:49:56.343549  334582 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 20:49:56.343616  334582 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 20:49:56.368531  334582 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 20:49:56.376158  334582 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 20:49:56.376246  334582 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0918 20:49:56.491529  334582 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0918 20:49:56.491678  334582 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0918 20:49:59.794427  321053 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0918 20:49:59.807734  321053 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
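The healthz probe above is a plain HTTPS GET against the apiserver, considered healthy once it returns 200 with body "ok". A minimal sketch of the same probe; InsecureSkipVerify stands in here for loading the cluster CA, which minikube does for real:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.76.2:8443/healthz") // endpoint from the log
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }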
	I0918 20:49:59.811016  321053 out.go:201] 
	W0918 20:49:59.815069  321053 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0918 20:49:59.815113  321053 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0918 20:49:59.815138  321053 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0918 20:49:59.815147  321053 out.go:270] * 
	W0918 20:49:59.816029  321053 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 20:49:59.817787  321053 out.go:201] 
	
	
	==> Docker <==
	Sep 18 20:44:53 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:44:53.755331116Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:44:53 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:44:53.970967507Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:44:53 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:44:53.971179342Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:44:53 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:44:53.971209003Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 18 20:45:24 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:24.773797669Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:45:24 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:24.989540833Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:45:24 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:24.989660264Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:45:24 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:24.989687045Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 18 20:45:35 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:35.422894701Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:45:35 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:35.422950043Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:45:35 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:45:35.426459943Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:46:14 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:46:14.768906719Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:46:14 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:46:14.977410531Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:46:14 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:46:14.977525162Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:46:14 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:46:14.977554355Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 18 20:47:10 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:10.423829841Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:47:10 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:10.423883780Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:47:10 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:10.426682898Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:47:40 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:40.789203479Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:47:41 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:41.109578061Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:47:41 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:41.109849482Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 18 20:47:41 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:47:41.109890006Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 18 20:49:59 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:49:59.426643968Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:49:59 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:49:59.426698080Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:49:59 old-k8s-version-959748 dockerd[1083]: time="2024-09-18T20:49:59.429224898Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
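The fake.domain pull failures above are expected for this run: the test deliberately points metrics-server at an unresolvable registry host, so every pull attempt dies at DNS resolution (192.168.76.1:53 in the log is the container network's resolver). A minimal Go sketch, not part of the suite, that reproduces the same resolver error:

// reproduce the "no such host" failure dockerd logs for the bogus
// registry host; any nonexistent hostname behaves the same way.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		// Expected output resembles:
		// lookup failed: lookup fake.domain: no such host
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("unexpectedly resolved to:", addrs)
}

Nothing about fake.domain itself is special; the point of the test configuration is that the name can never resolve, keeping metrics-server permanently unpullable.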
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d98463859c488       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       2                   e0c8aa86668b1       storage-provisioner
	57496d1161067       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   a88c8ea194f64       kubernetes-dashboard-cd95d586-llwmw
	58fcf563c4bee       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   e109df324bf60       busybox
	ebd705b772fca       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       1                   e0c8aa86668b1       storage-provisioner
	c469388f2e868       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   367d74f47a496       coredns-74ff55c5b-wqss6
	576f1c60a0bfa       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   758ccdecdd1db       kube-proxy-6qhft
	468c9a4285464       05b738aa1bc63                                                                                         5 minutes ago       Running             etcd                      1                   1deb4050924d7       etcd-old-k8s-version-959748
	0b5c2c549d85d       e7605f88f17d6                                                                                         5 minutes ago       Running             kube-scheduler            1                   c96e49a389743       kube-scheduler-old-k8s-version-959748
	d5f549195fea5       1df8a2b116bd1                                                                                         5 minutes ago       Running             kube-controller-manager   1                   43a5abfe4daf8       kube-controller-manager-old-k8s-version-959748
	93890fa25a1b8       2c08bbbc02d3a                                                                                         5 minutes ago       Running             kube-apiserver            1                   d321235ab620a       kube-apiserver-old-k8s-version-959748
	f959bb1ad0482       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   50149ea593fca       busybox
	22f71e25c69d2       db91994f4ee8f                                                                                         8 minutes ago       Exited              coredns                   0                   e3ad791a1e240       coredns-74ff55c5b-wqss6
	2906bf503bae7       25a5233254979                                                                                         8 minutes ago       Exited              kube-proxy                0                   c79cf2ee1878e       kube-proxy-6qhft
	d7d92bf388a8b       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   cb2a6e0d50a1c       kube-scheduler-old-k8s-version-959748
	42d259a39ced3       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   2a1a2632a4935       kube-apiserver-old-k8s-version-959748
	04f0f084f259c       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   54f09ab7f63f4       kube-controller-manager-old-k8s-version-959748
	ee3fb21586d8e       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   52a1ff82b6eaf       etcd-old-k8s-version-959748
	
	
	==> coredns [22f71e25c69d] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c469388f2e86] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:45912 - 49544 "HINFO IN 1739150858779864022.6070968821236932526. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00789715s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-959748
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-959748
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=85073601a832bd4bbda5d11fa91feafff6ec6b91
	                    minikube.k8s.io/name=old-k8s-version-959748
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_18T20_41_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 18 Sep 2024 20:41:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-959748
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 18 Sep 2024 20:49:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 18 Sep 2024 20:45:12 +0000   Wed, 18 Sep 2024 20:41:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 18 Sep 2024 20:45:12 +0000   Wed, 18 Sep 2024 20:41:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 18 Sep 2024 20:45:12 +0000   Wed, 18 Sep 2024 20:41:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 18 Sep 2024 20:45:12 +0000   Wed, 18 Sep 2024 20:41:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-959748
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 f0043dedd6ef455d9da5c5d22e7b3992
	  System UUID:                64e7087e-9067-43ff-a088-de3fd35bbea0
	  Boot ID:                    89948b1e-c5b8-41d2-bbb3-b80b856868d6
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-wqss6                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m23s
	  kube-system                 etcd-old-k8s-version-959748                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m35s
	  kube-system                 kube-apiserver-old-k8s-version-959748             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-old-k8s-version-959748    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-proxy-6qhft                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-old-k8s-version-959748             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 metrics-server-9975d5f86-jxd9z                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m23s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-5x67x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-llwmw               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m50s (x5 over 8m50s)  kubelet     Node old-k8s-version-959748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m50s (x5 over 8m50s)  kubelet     Node old-k8s-version-959748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m50s (x4 over 8m50s)  kubelet     Node old-k8s-version-959748 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m36s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s                  kubelet     Node old-k8s-version-959748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s                  kubelet     Node old-k8s-version-959748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s                  kubelet     Node old-k8s-version-959748 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m25s                  kubelet     Node old-k8s-version-959748 status is now: NodeReady
	  Normal  Starting                 8m21s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-959748 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-959748 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet     Node old-k8s-version-959748 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep18 19:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015410] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.490719] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.720496] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.132493] kauditd_printk_skb: 36 callbacks suppressed
	[Sep18 19:57] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=00000012 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001010] FS-Cache: O-cookie d=000000003fcd60b4{9P.session} n=00000000e5323280
	[  +0.001133] FS-Cache: O-key=[10] '34323935343933353136'
	[  +0.000800] FS-Cache: N-cookie c=00000013 [p=00000002 fl=2 nc=0 na=1]
	[  +0.001021] FS-Cache: N-cookie d=000000003fcd60b4{9P.session} n=00000000727eeb3f
	[  +0.001172] FS-Cache: N-key=[10] '34323935343933353136'
	[Sep18 19:59] hrtimer: interrupt took 19099168 ns
	[Sep18 20:03] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
	[Sep18 20:32] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [468c9a428546] <==
	2024-09-18 20:45:55.587608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:46:05.587657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:46:15.587522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:46:25.587630 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:46:35.587621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:46:45.587692 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:46:55.587560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:47:05.587604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:47:15.587654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:47:25.587743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:47:35.587722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:47:45.587770 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:47:55.587574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:48:05.587754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:48:15.587516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:48:25.587838 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:48:35.587565 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:48:45.587559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:48:55.587575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:49:05.587532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:49:15.587729 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:49:25.591030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:49:35.587518 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:49:45.587635 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:49:55.588144 I | etcdserver/api/etcdhttp: /health OK (status code 200)
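The steady ten-second /health cadence above is consistent with the liveness probe kubeadm configures for etcd. The sketch below issues the same request by hand; the certificate paths are kubeadm defaults and are assumptions about this node, so adjust them if the layout differs:

// query etcd's /health endpoint with the kubeadm healthcheck client cert.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	ca, err := os.ReadFile("/etc/kubernetes/pki/etcd/ca.crt") // assumed kubeadm default path
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(ca)

	// kubeadm issues this client certificate specifically for health checking.
	cert, err := tls.LoadX509KeyPair(
		"/etc/kubernetes/pki/etcd/healthcheck-client.crt",
		"/etc/kubernetes/pki/etcd/healthcheck-client.key",
	)
	if err != nil {
		panic(err)
	}

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	}}
	resp, err := client.Get("https://127.0.0.1:2379/health")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy member answers 200
}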
	
	
	==> etcd [ee3fb21586d8] <==
	2024-09-18 20:41:13.536994 I | etcdserver: published {Name:old-k8s-version-959748 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-18 20:41:13.537245 I | embed: ready to serve client requests
	2024-09-18 20:41:13.544825 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-18 20:41:13.546708 I | embed: ready to serve client requests
	2024-09-18 20:41:13.558877 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-18 20:41:13.591292 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-18 20:41:13.592631 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-18 20:41:22.012388 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:41:30.805819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:41:35.144031 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:41:45.144801 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:41:55.144309 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:42:05.144583 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:42:15.144493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:42:25.144201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:42:35.144269 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:42:45.167750 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:42:55.144232 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:43:05.146649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:43:15.144310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:43:25.144753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:43:35.144317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-18 20:43:39.660523 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/09/18 20:43:39 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-09-18 20:43:39.744568 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> kernel <==
	 20:50:02 up  1:32,  0 users,  load average: 1.90, 2.29, 2.94
	Linux old-k8s-version-959748 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [42d259a39ced] <==
	W0918 20:43:39.695909       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.695956       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.696002       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.696046       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.696091       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.696138       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.696184       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.696228       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.709184       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.709266       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.709315       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.709360       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.709404       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0918 20:43:39.710476       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0918 20:43:39.710631       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0918 20:43:39.710700       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710767       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710802       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710833       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710866       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710901       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710936       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.710969       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.711003       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0918 20:43:39.715379       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [93890fa25a1b] <==
	I0918 20:47:07.089941       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 20:47:07.089949       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0918 20:47:14.292295       1 handler_proxy.go:102] no RequestInfo found in the context
	E0918 20:47:14.292378       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0918 20:47:14.292389       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 20:47:37.449026       1 client.go:360] parsed scheme: "passthrough"
	I0918 20:47:37.449072       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 20:47:37.449081       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 20:48:15.355151       1 client.go:360] parsed scheme: "passthrough"
	I0918 20:48:15.355199       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 20:48:15.355207       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 20:48:51.596087       1 client.go:360] parsed scheme: "passthrough"
	I0918 20:48:51.596140       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 20:48:51.596150       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0918 20:49:12.337634       1 handler_proxy.go:102] no RequestInfo found in the context
	E0918 20:49:12.337710       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0918 20:49:12.337723       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0918 20:49:22.589561       1 client.go:360] parsed scheme: "passthrough"
	I0918 20:49:22.589608       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 20:49:22.589617       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0918 20:50:00.759445       1 client.go:360] parsed scheme: "passthrough"
	I0918 20:50:00.759500       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0918 20:50:00.759510       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
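The recurring 503 for v1beta1.metrics.k8s.io follows from the metrics-server image never being pulled: the aggregated APIService never turns Available, so the OpenAPI controller keeps rate-limit requeueing it. A client-go sketch (kubeconfig at the default path is an assumption) that reads that APIService's conditions directly:

// fetch the v1beta1.metrics.k8s.io APIService via the dynamic client and
// print its status conditions; Available=False explains the 503s above.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{
		Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices",
	}
	svc, err := dyn.Resource(gvr).Get(context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	conds, _, _ := unstructured.NestedSlice(svc.Object, "status", "conditions")
	fmt.Println(conds) // expect a condition with type Available and status False
}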
	
	
	==> kube-controller-manager [04f0f084f259] <==
	I0918 20:41:39.801314       1 shared_informer.go:247] Caches are synced for disruption 
	I0918 20:41:39.801327       1 disruption.go:339] Sending events to api server.
	I0918 20:41:39.811861       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6qhft"
	I0918 20:41:39.868284       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0918 20:41:39.886308       1 shared_informer.go:247] Caches are synced for HPA 
	I0918 20:41:39.886400       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0918 20:41:39.915555       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4c4sc"
	I0918 20:41:39.936825       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0918 20:41:39.941767       1 shared_informer.go:247] Caches are synced for resource quota 
	I0918 20:41:39.941840       1 shared_informer.go:247] Caches are synced for resource quota 
	E0918 20:41:39.955122       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"8affae2a-7ad4-4df6-901d-0479afd95d3c", ResourceVersion:"248", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862288883, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40014457e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001445800)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001445820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40015fc880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001445840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001445860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40014458a0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40014bef60), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40003d6f18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40000adc00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40004013e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40003d70c8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0918 20:41:39.961810       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	I0918 20:41:39.962529       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-wqss6"
	I0918 20:41:39.985806       1 shared_informer.go:247] Caches are synced for endpoint 
	I0918 20:41:39.985838       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	E0918 20:41:40.027319       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0918 20:41:40.094800       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0918 20:41:40.395022       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0918 20:41:40.452022       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0918 20:41:40.452057       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0918 20:41:41.489355       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0918 20:41:41.517668       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4c4sc"
	I0918 20:43:38.261239       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0918 20:43:38.553039       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0918 20:43:39.438655       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-jxd9z"
	
	
	==> kube-controller-manager [d5f549195fea] <==
	W0918 20:45:35.567990       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:46:01.509015       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:46:07.218604       1 request.go:655] Throttling request took 1.048193818s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0918 20:46:08.070099       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:46:32.047733       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:46:39.720752       1 request.go:655] Throttling request took 1.048336023s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0918 20:46:40.572235       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:47:02.550082       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:47:12.222820       1 request.go:655] Throttling request took 1.048183975s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0918 20:47:13.074480       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:47:33.057312       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:47:44.724874       1 request.go:655] Throttling request took 1.048123563s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W0918 20:47:45.577791       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:48:03.559217       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:48:17.228275       1 request.go:655] Throttling request took 1.048313514s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0918 20:48:18.079886       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:48:34.061376       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:48:49.730242       1 request.go:655] Throttling request took 1.048389562s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0918 20:48:50.581765       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:49:04.563701       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:49:22.232182       1 request.go:655] Throttling request took 1.045863212s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0918 20:49:23.083806       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0918 20:49:35.065907       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0918 20:49:54.734133       1 request.go:655] Throttling request took 1.04808655s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0918 20:49:55.585785       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
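The garbage-collector and resource-quota warnings above share one root cause: API discovery returns a partial failure for metrics.k8s.io/v1beta1 while every healthy group still lists, and the throttled GETs are the controller-manager's periodic rediscovery. A sketch (default kubeconfig path assumed) that reproduces the partial discovery error:

// run the same API discovery the controller-manager performs; with the
// metrics APIService down this yields a partial error while still
// returning every healthy group/version.
package main

import (
	"fmt"
	"path/filepath"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, resources, err := dc.ServerGroupsAndResources()
	if err != nil {
		// Expected here: "unable to retrieve the complete list of server
		// APIs: metrics.k8s.io/v1beta1: ..." -- the same text the
		// controller-manager logs above.
		fmt.Println("partial discovery failure:", err)
	}
	fmt.Println("discovered", len(resources), "group/version resource lists")
}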
	
	
	==> kube-proxy [2906bf503bae] <==
	I0918 20:41:41.007535       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0918 20:41:41.007637       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0918 20:41:41.076612       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0918 20:41:41.076715       1 server_others.go:185] Using iptables Proxier.
	I0918 20:41:41.076931       1 server.go:650] Version: v1.20.0
	I0918 20:41:41.077438       1 config.go:315] Starting service config controller
	I0918 20:41:41.077456       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0918 20:41:41.079490       1 config.go:224] Starting endpoint slice config controller
	I0918 20:41:41.079534       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0918 20:41:41.178616       1 shared_informer.go:247] Caches are synced for service config 
	I0918 20:41:41.179648       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [576f1c60a0bf] <==
	I0918 20:44:13.636250       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0918 20:44:13.636329       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0918 20:44:13.770841       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0918 20:44:13.770935       1 server_others.go:185] Using iptables Proxier.
	I0918 20:44:13.771224       1 server.go:650] Version: v1.20.0
	I0918 20:44:13.782520       1 config.go:315] Starting service config controller
	I0918 20:44:13.782545       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0918 20:44:13.782565       1 config.go:224] Starting endpoint slice config controller
	I0918 20:44:13.782574       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0918 20:44:13.882676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0918 20:44:13.882753       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [0b5c2c549d85] <==
	I0918 20:44:07.139868       1 serving.go:331] Generated self-signed cert in-memory
	W0918 20:44:10.902054       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:44:10.902292       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:44:10.902418       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:44:10.902519       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:44:11.382544       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0918 20:44:11.382651       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:44:11.382657       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:44:11.382670       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0918 20:44:11.483431       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [d7d92bf388a8] <==
	I0918 20:41:15.579107       1 serving.go:331] Generated self-signed cert in-memory
	W0918 20:41:20.271572       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0918 20:41:20.271604       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 20:41:20.271619       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0918 20:41:20.271625       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 20:41:20.324038       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0918 20:41:20.328523       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0918 20:41:20.343692       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 20:41:20.345485       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0918 20:41:20.362928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 20:41:20.363286       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 20:41:20.363687       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 20:41:20.363932       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 20:41:20.364216       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 20:41:20.364422       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 20:41:20.364628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 20:41:20.364824       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 20:41:20.365019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 20:41:20.365207       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 20:41:20.365292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 20:41:20.373312       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 20:41:21.273436       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 20:41:21.346299       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 20:41:21.612320       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0918 20:41:23.745658       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 18 20:47:50 old-k8s-version-959748 kubelet[1382]: E0918 20:47:50.403588    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:47:54 old-k8s-version-959748 kubelet[1382]: E0918 20:47:54.411522    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:01 old-k8s-version-959748 kubelet[1382]: E0918 20:48:01.402535    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:08 old-k8s-version-959748 kubelet[1382]: E0918 20:48:08.403570    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:14 old-k8s-version-959748 kubelet[1382]: E0918 20:48:14.405384    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:21 old-k8s-version-959748 kubelet[1382]: E0918 20:48:21.402696    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:28 old-k8s-version-959748 kubelet[1382]: E0918 20:48:28.405928    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:34 old-k8s-version-959748 kubelet[1382]: E0918 20:48:34.402538    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:39 old-k8s-version-959748 kubelet[1382]: E0918 20:48:39.402531    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:46 old-k8s-version-959748 kubelet[1382]: E0918 20:48:46.412489    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:54 old-k8s-version-959748 kubelet[1382]: E0918 20:48:54.402285    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:48:58 old-k8s-version-959748 kubelet[1382]: E0918 20:48:58.405409    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:05 old-k8s-version-959748 kubelet[1382]: E0918 20:49:05.405649    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:11 old-k8s-version-959748 kubelet[1382]: E0918 20:49:11.402291    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:16 old-k8s-version-959748 kubelet[1382]: E0918 20:49:16.402969    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:24 old-k8s-version-959748 kubelet[1382]: E0918 20:49:24.404436    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:31 old-k8s-version-959748 kubelet[1382]: E0918 20:49:31.403134    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:37 old-k8s-version-959748 kubelet[1382]: E0918 20:49:37.402868    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:45 old-k8s-version-959748 kubelet[1382]: E0918 20:49:45.406022    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:49 old-k8s-version-959748 kubelet[1382]: E0918 20:49:49.438489    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	Sep 18 20:49:59 old-k8s-version-959748 kubelet[1382]: E0918 20:49:59.431996    1382 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 18 20:49:59 old-k8s-version-959748 kubelet[1382]: E0918 20:49:59.432455    1382 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 18 20:49:59 old-k8s-version-959748 kubelet[1382]: E0918 20:49:59.432692    1382 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-tz77k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 18 20:49:59 old-k8s-version-959748 kubelet[1382]: E0918 20:49:59.432854    1382 pod_workers.go:191] Error syncing pod 3996e8db-44fe-441f-a02b-998c767969dd ("metrics-server-9975d5f86-jxd9z_kube-system(3996e8db-44fe-441f-a02b-998c767969dd)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 18 20:50:01 old-k8s-version-959748 kubelet[1382]: E0918 20:50:01.412325    1382 pod_workers.go:191] Error syncing pod 220c2c35-5f0c-4ba9-a519-8d27c889f472 ("dashboard-metrics-scraper-8d5bb5db8-5x67x_kubernetes-dashboard(220c2c35-5f0c-4ba9-a519-8d27c889f472)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [57496d116106] <==
	2024/09/18 20:44:36 Using namespace: kubernetes-dashboard
	2024/09/18 20:44:36 Using in-cluster config to connect to apiserver
	2024/09/18 20:44:36 Using secret token for csrf signing
	2024/09/18 20:44:36 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/18 20:44:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/18 20:44:37 Successful initial request to the apiserver, version: v1.20.0
	2024/09/18 20:44:37 Generating JWE encryption key
	2024/09/18 20:44:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/18 20:44:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/18 20:44:37 Initializing JWE encryption key from synchronized object
	2024/09/18 20:44:37 Creating in-cluster Sidecar client
	2024/09/18 20:44:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:44:37 Serving insecurely on HTTP port: 9090
	2024/09/18 20:45:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:45:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:46:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:46:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:47:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:47:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:48:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:48:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:49:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:49:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/18 20:44:36 Starting overwatch
	
	
	==> storage-provisioner [d98463859c48] <==
	I0918 20:44:55.624429       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 20:44:55.661577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 20:44:55.661624       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 20:45:13.179222       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 20:45:13.181500       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959748_526393d7-f506-4a24-b3a8-de438b2be304!
	I0918 20:45:13.182059       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05808235-1e61-4f06-be63-d6ecd22688d3", APIVersion:"v1", ResourceVersion:"803", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-959748_526393d7-f506-4a24-b3a8-de438b2be304 became leader
	I0918 20:45:13.282128       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-959748_526393d7-f506-4a24-b3a8-de438b2be304!
	
	
	==> storage-provisioner [ebd705b772fc] <==
	I0918 20:44:13.800212       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0918 20:44:43.804587       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
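Two signals dominate the log dump above. The kube-scheduler's "forbidden" list/watch errors are the usual transient noise while RBAC authorizers catch up after a restart; they stop once the "Caches are synced" line appears. The metrics-server ImagePullBackOff is expected in this suite, since the test intentionally points the image at the unreachable fake.domain registry. A minimal sketch for confirming both readings against a live cluster (the k8s-app=metrics-server label is an assumption about how the addon labels its pods, and kubectl auth can-i --as requires impersonation rights):

	# Does the scheduler's RBAC now permit the list that was failing?
	kubectl auth can-i list services --as=system:kube-scheduler

	# Surface the image-pull events for whatever pod currently backs metrics-server
	# (label selector assumed, not taken from this report).
	kubectl -n kube-system describe pod -l k8s-app=metrics-server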
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-959748 -n old-k8s-version-959748
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-959748 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-jxd9z dashboard-metrics-scraper-8d5bb5db8-5x67x
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-959748 describe pod metrics-server-9975d5f86-jxd9z dashboard-metrics-scraper-8d5bb5db8-5x67x
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-959748 describe pod metrics-server-9975d5f86-jxd9z dashboard-metrics-scraper-8d5bb5db8-5x67x: exit status 1 (124.962131ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-jxd9z" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-5x67x" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-959748 describe pod metrics-server-9975d5f86-jxd9z dashboard-metrics-scraper-8d5bb5db8-5x67x: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.04s)
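The two NotFound errors above are a post-mortem race rather than a second failure: the pods captured at the non-running check were deleted (or replaced under new hash suffixes) before the describe ran. Describing by label instead of by the captured names sidesteps the race, and the failed subtest can be replayed on its own with go test's subtest filter. A sketch, assuming a checked-out minikube tree with out/minikube-linux-arm64 already built (the repo's testing docs may require extra -args flags; the label selector is an assumption):

	# Describe by label so a recreated pod is still matched.
	kubectl --context old-k8s-version-959748 -n kube-system describe pod -l k8s-app=metrics-server

	# Replay only the failing subtest.
	go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -v -timeout 90m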

                                                
                                    

Test pass (316/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.19
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.12
9 TestDownloadOnly/v1.20.0/DeleteAll 0.34
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 5.07
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 87.03
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 221.36
29 TestAddons/serial/Volcano 40.28
31 TestAddons/serial/GCPAuth/Namespaces 0.17
34 TestAddons/parallel/Ingress 21.59
35 TestAddons/parallel/InspektorGadget 11.8
36 TestAddons/parallel/MetricsServer 6.74
39 TestAddons/parallel/CSI 41.95
40 TestAddons/parallel/Headlamp 16.73
41 TestAddons/parallel/CloudSpanner 6.53
42 TestAddons/parallel/LocalPath 53.41
43 TestAddons/parallel/NvidiaDevicePlugin 6.47
44 TestAddons/parallel/Yakd 10.71
45 TestAddons/StoppedEnableDisable 11.21
46 TestCertOptions 36.31
47 TestCertExpiration 245.73
48 TestDockerFlags 40.91
49 TestForceSystemdFlag 45.4
50 TestForceSystemdEnv 45.85
56 TestErrorSpam/setup 31.75
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.21
59 TestErrorSpam/pause 1.42
60 TestErrorSpam/unpause 1.71
61 TestErrorSpam/stop 11.17
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 79.11
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.79
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.2
73 TestFunctional/serial/CacheCmd/cache/add_local 0.94
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.13
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 43.69
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.18
84 TestFunctional/serial/LogsFileCmd 1.22
85 TestFunctional/serial/InvalidService 5.32
87 TestFunctional/parallel/ConfigCmd 0.57
88 TestFunctional/parallel/DashboardCmd 11.29
89 TestFunctional/parallel/DryRun 0.49
90 TestFunctional/parallel/InternationalLanguage 0.25
91 TestFunctional/parallel/StatusCmd 1.31
95 TestFunctional/parallel/ServiceCmdConnect 11.74
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 27.35
99 TestFunctional/parallel/SSHCmd 0.73
100 TestFunctional/parallel/CpCmd 2.69
102 TestFunctional/parallel/FileSync 0.29
103 TestFunctional/parallel/CertSync 2.22
107 TestFunctional/parallel/NodeLabels 0.16
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
111 TestFunctional/parallel/License 0.36
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.8
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.56
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.28
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.62
125 TestFunctional/parallel/ServiceCmd/List 0.74
126 TestFunctional/parallel/ProfileCmd/profile_list 0.63
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
130 TestFunctional/parallel/MountCmd/any-port 8.45
131 TestFunctional/parallel/ServiceCmd/Format 0.55
132 TestFunctional/parallel/ServiceCmd/URL 0.47
134 TestFunctional/parallel/Version/short 0.13
135 TestFunctional/parallel/Version/components 1.23
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.52
141 TestFunctional/parallel/ImageCommands/Setup 0.7
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.94
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.57
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.47
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
152 TestFunctional/parallel/DockerEnv/bash 1.65
153 TestFunctional/parallel/MountCmd/VerifyCleanup 2.54
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 128.45
161 TestMultiControlPlane/serial/DeployApp 48.59
162 TestMultiControlPlane/serial/PingHostFromPods 1.79
163 TestMultiControlPlane/serial/AddWorkerNode 28.65
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.14
166 TestMultiControlPlane/serial/CopyFile 20.4
167 TestMultiControlPlane/serial/StopSecondaryNode 11.89
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
169 TestMultiControlPlane/serial/RestartSecondaryNode 68.27
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.09
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 167.69
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.55
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
174 TestMultiControlPlane/serial/StopCluster 32.88
175 TestMultiControlPlane/serial/RestartCluster 160.76
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
177 TestMultiControlPlane/serial/AddSecondaryNode 50.92
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
181 TestImageBuild/serial/Setup 32.35
182 TestImageBuild/serial/NormalBuild 2.02
183 TestImageBuild/serial/BuildWithBuildArg 1.01
184 TestImageBuild/serial/BuildWithDockerIgnore 1.04
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
189 TestJSONOutput/start/Command 74.91
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.64
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.58
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.98
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.23
214 TestKicCustomNetwork/create_custom_network 34.17
215 TestKicCustomNetwork/use_default_bridge_network 34.77
216 TestKicExistingNetwork 34.87
217 TestKicCustomSubnet 32.89
218 TestKicStaticIP 35.17
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 73.87
223 TestMountStart/serial/StartWithMountFirst 15.08
224 TestMountStart/serial/VerifyMountFirst 0.26
225 TestMountStart/serial/StartWithMountSecond 8.46
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.48
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 9.25
231 TestMountStart/serial/VerifyMountPostStop 0.34
234 TestMultiNode/serial/FreshStart2Nodes 83.16
235 TestMultiNode/serial/DeployApp2Nodes 55.18
236 TestMultiNode/serial/PingHostFrom2Pods 1.09
237 TestMultiNode/serial/AddNode 17.89
238 TestMultiNode/serial/MultiNodeLabels 0.1
239 TestMultiNode/serial/ProfileList 0.71
240 TestMultiNode/serial/CopyFile 10.63
241 TestMultiNode/serial/StopNode 2.31
242 TestMultiNode/serial/StartAfterStop 11.76
243 TestMultiNode/serial/RestartKeepsNodes 98.32
244 TestMultiNode/serial/DeleteNode 5.74
245 TestMultiNode/serial/StopMultiNode 21.6
246 TestMultiNode/serial/RestartMultiNode 58.05
247 TestMultiNode/serial/ValidateNameConflict 39.11
252 TestPreload 104.36
254 TestScheduledStopUnix 106.27
255 TestSkaffold 121.67
257 TestInsufficientStorage 11.97
258 TestRunningBinaryUpgrade 102.16
260 TestKubernetesUpgrade 393.27
261 TestMissingContainerUpgrade 179.09
263 TestPause/serial/Start 57.22
264 TestPause/serial/SecondStartNoReconfiguration 32.96
265 TestPause/serial/Pause 0.78
266 TestPause/serial/VerifyStatus 0.5
267 TestPause/serial/Unpause 0.65
268 TestPause/serial/PauseAgain 1.08
269 TestPause/serial/DeletePaused 2.98
270 TestPause/serial/VerifyDeletedResources 0.17
271 TestStoppedBinaryUpgrade/Setup 1.22
272 TestStoppedBinaryUpgrade/Upgrade 92.52
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.37
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
283 TestNoKubernetes/serial/StartWithK8s 43.71
295 TestNoKubernetes/serial/StartWithStopK8s 18.17
296 TestNoKubernetes/serial/Start 12.33
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
298 TestNoKubernetes/serial/ProfileList 0.89
299 TestNoKubernetes/serial/Stop 2.6
300 TestNoKubernetes/serial/StartNoArgs 9.49
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
303 TestStartStop/group/old-k8s-version/serial/FirstStart 168.81
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.42
306 TestStartStop/group/old-k8s-version/serial/DeployApp 10.96
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.64
308 TestStartStop/group/old-k8s-version/serial/Stop 11.38
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.61
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.09
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.72
316 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
318 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.99
321 TestStartStop/group/embed-certs/serial/FirstStart 53.14
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.13
324 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
325 TestStartStop/group/old-k8s-version/serial/Pause 2.99
327 TestStartStop/group/no-preload/serial/FirstStart 90.14
328 TestStartStop/group/embed-certs/serial/DeployApp 10.62
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.53
330 TestStartStop/group/embed-certs/serial/Stop 11.16
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
332 TestStartStop/group/embed-certs/serial/SecondStart 274.55
333 TestStartStop/group/no-preload/serial/DeployApp 10.39
334 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
335 TestStartStop/group/no-preload/serial/Stop 11.08
336 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
337 TestStartStop/group/no-preload/serial/SecondStart 267.48
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
340 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
341 TestStartStop/group/embed-certs/serial/Pause 2.99
343 TestStartStop/group/newest-cni/serial/FirstStart 39.28
344 TestStartStop/group/newest-cni/serial/DeployApp 0
345 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.1
346 TestStartStop/group/newest-cni/serial/Stop 11.15
347 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
348 TestStartStop/group/newest-cni/serial/SecondStart 19.66
349 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.14
351 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
354 TestStartStop/group/newest-cni/serial/Pause 3.04
355 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
356 TestStartStop/group/no-preload/serial/Pause 4.42
357 TestNetworkPlugins/group/auto/Start 59.17
358 TestNetworkPlugins/group/kindnet/Start 74.01
359 TestNetworkPlugins/group/auto/KubeletFlags 0.45
360 TestNetworkPlugins/group/auto/NetCatPod 12.42
361 TestNetworkPlugins/group/auto/DNS 0.3
362 TestNetworkPlugins/group/auto/Localhost 0.25
363 TestNetworkPlugins/group/auto/HairPin 0.21
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
366 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
367 TestNetworkPlugins/group/calico/Start 88.51
368 TestNetworkPlugins/group/kindnet/DNS 0.22
369 TestNetworkPlugins/group/kindnet/Localhost 0.16
370 TestNetworkPlugins/group/kindnet/HairPin 0.2
371 TestNetworkPlugins/group/custom-flannel/Start 65.52
372 TestNetworkPlugins/group/calico/ControllerPod 6.01
373 TestNetworkPlugins/group/calico/KubeletFlags 0.31
374 TestNetworkPlugins/group/calico/NetCatPod 12.29
375 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
376 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.32
377 TestNetworkPlugins/group/custom-flannel/DNS 0.31
378 TestNetworkPlugins/group/calico/DNS 0.37
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
380 TestNetworkPlugins/group/calico/Localhost 0.28
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.26
382 TestNetworkPlugins/group/calico/HairPin 0.22
383 TestNetworkPlugins/group/false/Start 55.28
384 TestNetworkPlugins/group/enable-default-cni/Start 52.43
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.33
386 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.3
387 TestNetworkPlugins/group/false/KubeletFlags 0.49
388 TestNetworkPlugins/group/false/NetCatPod 12.36
389 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
390 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
391 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
392 TestNetworkPlugins/group/false/DNS 0.2
393 TestNetworkPlugins/group/false/Localhost 0.19
394 TestNetworkPlugins/group/false/HairPin 0.18
395 TestNetworkPlugins/group/flannel/Start 62.95
396 TestNetworkPlugins/group/bridge/Start 80.6
397 TestNetworkPlugins/group/flannel/ControllerPod 6.01
398 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
399 TestNetworkPlugins/group/flannel/NetCatPod 11.28
400 TestNetworkPlugins/group/flannel/DNS 0.21
401 TestNetworkPlugins/group/flannel/Localhost 0.19
402 TestNetworkPlugins/group/flannel/HairPin 0.17
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
404 TestNetworkPlugins/group/bridge/NetCatPod 12.32
405 TestNetworkPlugins/group/bridge/DNS 0.24
406 TestNetworkPlugins/group/bridge/Localhost 0.26
407 TestNetworkPlugins/group/bridge/HairPin 0.23
408 TestNetworkPlugins/group/kubenet/Start 74.69
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
410 TestNetworkPlugins/group/kubenet/NetCatPod 10.25
411 TestNetworkPlugins/group/kubenet/DNS 0.18
412 TestNetworkPlugins/group/kubenet/Localhost 0.16
413 TestNetworkPlugins/group/kubenet/HairPin 0.21
TestDownloadOnly/v1.20.0/json-events (10.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-843008 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-843008 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (10.193641981s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0918 19:38:03.588062    7565 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0918 19:38:03.588143    7565 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
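This check only asserts that the tarball exists; if a cached preload is suspected of being stale or truncated, it can be verified by hand against the md5 the downloader pins (the checksum query parameter is visible in the v1.20.0 "Last Start" log below). A sketch using this run's cache path:

	md5sum /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	# Expect 1a3e8f9b29e6affec63d76d0d3000942, per the ?checksum=md5:... download URL.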

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-843008
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-843008: exit status 85 (115.243081ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-843008 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |          |
	|         | -p download-only-843008        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:37:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:37:53.438652    7570 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:37:53.438795    7570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:53.438805    7570 out.go:358] Setting ErrFile to fd 2...
	I0918 19:37:53.438810    7570 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:53.439052    7570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	W0918 19:37:53.439177    7570 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19667-2236/.minikube/config/config.json: open /home/jenkins/minikube-integration/19667-2236/.minikube/config/config.json: no such file or directory
	I0918 19:37:53.439620    7570 out.go:352] Setting JSON to true
	I0918 19:37:53.440431    7570 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1221,"bootTime":1726687053,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 19:37:53.440508    7570 start.go:139] virtualization:  
	I0918 19:37:53.444381    7570 out.go:97] [download-only-843008] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0918 19:37:53.444603    7570 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 19:37:53.444670    7570 notify.go:220] Checking for updates...
	I0918 19:37:53.447364    7570 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:37:53.450413    7570 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:53.453027    7570 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 19:37:53.455500    7570 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 19:37:53.458012    7570 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0918 19:37:53.463201    7570 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:37:53.463496    7570 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:37:53.490663    7570 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:37:53.490771    7570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:37:53.893524    7570 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 19:37:53.883516694 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:37:53.893634    7570 docker.go:318] overlay module found
	I0918 19:37:53.896426    7570 out.go:97] Using the docker driver based on user configuration
	I0918 19:37:53.896462    7570 start.go:297] selected driver: docker
	I0918 19:37:53.896471    7570 start.go:901] validating driver "docker" against <nil>
	I0918 19:37:53.896600    7570 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:37:53.959455    7570 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 19:37:53.949987076 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:37:53.959701    7570 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:37:53.960030    7570 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0918 19:37:53.960208    7570 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:37:53.963097    7570 out.go:169] Using Docker driver with root privileges
	I0918 19:37:53.965745    7570 cni.go:84] Creating CNI manager for ""
	I0918 19:37:53.965816    7570 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0918 19:37:53.965903    7570 start.go:340] cluster config:
	{Name:download-only-843008 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-843008 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:37:53.968638    7570 out.go:97] Starting "download-only-843008" primary control-plane node in "download-only-843008" cluster
	I0918 19:37:53.968670    7570 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 19:37:53.971333    7570 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0918 19:37:53.971369    7570 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:53.971467    7570 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 19:37:53.988039    7570 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:37:53.988252    7570 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 19:37:53.988363    7570 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:37:54.049607    7570 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 19:37:54.049636    7570 cache.go:56] Caching tarball of preloaded images
	I0918 19:37:54.049795    7570 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:54.052922    7570 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0918 19:37:54.052968    7570 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 19:37:54.133852    7570 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0918 19:37:57.937127    7570 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 19:37:57.937223    7570 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0918 19:37:59.004148    7570 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0918 19:37:59.004546    7570 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/download-only-843008/config.json ...
	I0918 19:37:59.004578    7570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/download-only-843008/config.json: {Name:mk803e23ca08ee627da288d6af1218377e61e56f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:37:59.004753    7570 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0918 19:37:59.004915    7570 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19667-2236/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-843008 host does not exist
	  To start a cluster, run: "minikube start -p download-only-843008"

                                                
                                                
-- /stdout --
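The "Last Start" dump follows glog's [IWEF]mmdd hh:mm:ss.uuuuuu format declared in its own header line, so warning and error records can be isolated mechanically when triaging a start. A minimal sketch, with last-start.log standing in for a saved copy of the dump (a hypothetical filename):

	# Keep only W/E records; the leading \s* tolerates the report's tab indentation.
	grep -E '^\s*[WE][0-9]{4}' last-start.log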
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.34s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-843008
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-593891 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-593891 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.074225545s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0918 19:38:09.326172    7565 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0918 19:38:09.326211    7565 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-593891
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-593891: exit status 85 (72.851541ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-843008 | jenkins | v1.34.0 | 18 Sep 24 19:37 UTC |                     |
	|         | -p download-only-843008        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| delete  | -p download-only-843008        | download-only-843008 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC | 18 Sep 24 19:38 UTC |
	| start   | -o=json --download-only        | download-only-593891 | jenkins | v1.34.0 | 18 Sep 24 19:38 UTC |                     |
	|         | -p download-only-593891        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/18 19:38:04
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:38:04.296623    7763 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:38:04.296822    7763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:04.296852    7763 out.go:358] Setting ErrFile to fd 2...
	I0918 19:38:04.296872    7763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:04.297678    7763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 19:38:04.298169    7763 out.go:352] Setting JSON to true
	I0918 19:38:04.298992    7763 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":1232,"bootTime":1726687053,"procs":147,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 19:38:04.299092    7763 start.go:139] virtualization:  
	I0918 19:38:04.330340    7763 out.go:97] [download-only-593891] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 19:38:04.330538    7763 notify.go:220] Checking for updates...
	I0918 19:38:04.359016    7763 out.go:169] MINIKUBE_LOCATION=19667
	I0918 19:38:04.378911    7763 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:04.409350    7763 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 19:38:04.438498    7763 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 19:38:04.468892    7763 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0918 19:38:04.532450    7763 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 19:38:04.532740    7763 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:38:04.553833    7763 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:38:04.553953    7763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:04.644453    7763 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 19:38:04.633827448 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:38:04.644570    7763 docker.go:318] overlay module found
	I0918 19:38:04.654739    7763 out.go:97] Using the docker driver based on user configuration
	I0918 19:38:04.654780    7763 start.go:297] selected driver: docker
	I0918 19:38:04.654788    7763 start.go:901] validating driver "docker" against <nil>
	I0918 19:38:04.654912    7763 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:04.712917    7763 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-18 19:38:04.702578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:38:04.713145    7763 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0918 19:38:04.713442    7763 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0918 19:38:04.713599    7763 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 19:38:04.723939    7763 out.go:169] Using Docker driver with root privileges
	I0918 19:38:04.733363    7763 cni.go:84] Creating CNI manager for ""
	I0918 19:38:04.733442    7763 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0918 19:38:04.733453    7763 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0918 19:38:04.733532    7763 start.go:340] cluster config:
	{Name:download-only-593891 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-593891 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:38:04.740969    7763 out.go:97] Starting "download-only-593891" primary control-plane node in "download-only-593891" cluster
	I0918 19:38:04.741009    7763 cache.go:121] Beginning downloading kic base image for docker with docker
	I0918 19:38:04.744618    7763 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0918 19:38:04.744657    7763 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:04.744758    7763 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0918 19:38:04.762428    7763 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0918 19:38:04.762572    7763 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0918 19:38:04.762592    7763 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0918 19:38:04.762597    7763 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0918 19:38:04.762621    7763 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0918 19:38:04.800707    7763 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 19:38:04.800733    7763 cache.go:56] Caching tarball of preloaded images
	I0918 19:38:04.800889    7763 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:04.807223    7763 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0918 19:38:04.807275    7763 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0918 19:38:04.896081    7763 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0918 19:38:07.640956    7763 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0918 19:38:07.641106    7763 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19667-2236/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0918 19:38:08.621986    7763 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0918 19:38:08.622546    7763 profile.go:143] Saving config to /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/download-only-593891/config.json ...
	I0918 19:38:08.622591    7763 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/download-only-593891/config.json: {Name:mk9b61ca83d3cda3be9e66e59cd121240c46e34d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:38:08.622912    7763 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0918 19:38:08.623184    7763 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19667-2236/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-593891 host does not exist
	  To start a cluster, run: "minikube start -p download-only-593891"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
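
Note on the preload fetch logged above: the tarball URL carries a ?checksum=md5:... query, and preload.go saves and verifies that digest before trusting the cache. A minimal Go sketch of the same verify-after-download idea; the URL, destination path, and helper name downloadWithMD5 are invented for illustration (only the md5 value is taken from the log), and minikube's real download package layers caching and locking on top of this basic check.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 (hypothetical helper) fetches url into dest and fails if
// the payload's MD5 digest does not match wantMD5 (hex-encoded).
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the stream so the hash covers exactly the bytes written to disk.
	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder URL and path; the md5 is the one shown in the download line above.
	fmt.Println(downloadWithMD5(
		"https://example.com/preloaded-images.tar.lz4",
		"/tmp/preloaded-images.tar.lz4",
		"402f69b5e09ccb1e1dbe401b4cdd104d",
	))
}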

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-593891
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0918 19:38:10.537336    7565 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-976038 --alsologtostderr --binary-mirror http://127.0.0.1:41665 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-976038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-976038
--- PASS: TestBinaryMirror (0.56s)
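
TestBinaryMirror above points minikube at http://127.0.0.1:41665 via --binary-mirror so kubectl is fetched locally instead of from dl.k8s.io. A rough sketch of what such a mirror amounts to, assuming the binaries sit under a ./mirror directory that mimics the dl.k8s.io path layout (the report does not show how the test actually serves them):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror so a file at
	// ./mirror/release/v1.31.1/bin/linux/arm64/kubectl answers the URL
	// minikube would otherwise request from dl.k8s.io.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Fatal(http.ListenAndServe("127.0.0.1:41665", nil))
}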

TestOffline (87.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-638862 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-638862 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m24.874754054s)
helpers_test.go:175: Cleaning up "offline-docker-638862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-638862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-638862: (2.152680173s)
--- PASS: TestOffline (87.03s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-923322
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-923322: exit status 85 (70.260709ms)

-- stdout --
	* Profile "addons-923322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-923322"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-923322
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-923322: exit status 85 (75.803476ms)

-- stdout --
	* Profile "addons-923322" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-923322"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (221.36s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-923322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-923322 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.361706388s)
--- PASS: TestAddons/Setup (221.36s)

TestAddons/serial/Volcano (40.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 50.591799ms
addons_test.go:897: volcano-scheduler stabilized in 51.072223ms
addons_test.go:913: volcano-controller stabilized in 51.307077ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-h2mnb" [1ad61020-c38d-4890-9a46-18bca28a746f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003830296s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-2twzx" [cd12d938-e043-42ce-b5b9-55b39ce7ea53] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00423168s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2nqg9" [79e4dff5-f4ce-4ac5-bb4e-a734352e7dd3] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004299382s
addons_test.go:932: (dbg) Run:  kubectl --context addons-923322 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-923322 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-923322 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f862cee2-f91b-434d-b74d-db1601f69d31] Pending
helpers_test.go:344: "test-job-nginx-0" [f862cee2-f91b-434d-b74d-db1601f69d31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [f862cee2-f91b-434d-b74d-db1601f69d31] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00676143s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable volcano --alsologtostderr -v=1: (10.524198816s)
--- PASS: TestAddons/serial/Volcano (40.28s)
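
The "waiting 6m0s for pods matching ..." lines throughout this report come from a helper that polls pods by label selector until all of them report Ready or the timeout expires. A minimal sketch of that pattern with client-go, using the volcano-scheduler selector from the run above; the kubeconfig path, 2-second poll interval, and allReady helper are assumptions of this sketch, not details taken from helpers_test.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// allReady reports whether the list is non-empty and every pod has the
// Ready condition set to true.
func allReady(pods []corev1.Pod) bool {
	if len(pods) == 0 {
		return false
	}
	for _, p := range pods {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			return false
		}
	}
	return true
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("volcano-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
		if err == nil && allReady(pods.Items) {
			fmt.Println("app=volcano-scheduler healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=volcano-scheduler")
}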

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-923322 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-923322 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Ingress (21.59s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-923322 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-923322 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-923322 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [111c0619-9f48-4aed-9eb7-4636b7c46e82] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [111c0619-9f48-4aed-9eb7-4636b7c46e82] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003643106s
I0918 19:51:47.605584    7565 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-923322 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable ingress-dns --alsologtostderr -v=1: (1.609307063s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable ingress --alsologtostderr -v=1: (7.808888453s)
--- PASS: TestAddons/parallel/Ingress (21.59s)

TestAddons/parallel/InspektorGadget (11.8s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-prz2t" [cffcd439-fa27-4718-a834-9509d4c523dd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004710576s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-923322
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-923322: (5.795760301s)
--- PASS: TestAddons/parallel/InspektorGadget (11.80s)

TestAddons/parallel/MetricsServer (6.74s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.475029ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-hwphq" [b9ffea56-bc3b-4b0e-b302-9726b4125780] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00351659s
addons_test.go:417: (dbg) Run:  kubectl --context addons-923322 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.74s)

TestAddons/parallel/CSI (41.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 11.254823ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-923322 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-923322 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a7d1b695-49c4-4152-95d3-b0c25baffc64] Pending
helpers_test.go:344: "task-pv-pod" [a7d1b695-49c4-4152-95d3-b0c25baffc64] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a7d1b695-49c4-4152-95d3-b0c25baffc64] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004112698s
addons_test.go:590: (dbg) Run:  kubectl --context addons-923322 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-923322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-923322 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-923322 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-923322 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-923322 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-923322 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [accf78fa-9412-4d40-823d-3092bd9d0bb3] Pending
helpers_test.go:344: "task-pv-pod-restore" [accf78fa-9412-4d40-823d-3092bd9d0bb3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [accf78fa-9412-4d40-823d-3092bd9d0bb3] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00365364s
addons_test.go:632: (dbg) Run:  kubectl --context addons-923322 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-923322 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-923322 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.929130219s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.95s)
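
The repeated "get pvc ... -o jsonpath={.status.phase}" runs above are a poll-until-Bound loop: the helper keeps querying the claim's phase until it reaches the expected value or the 6m0s budget runs out. A small sketch of the same loop shelling out to kubectl; the function name, 2-second interval, and error message are invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase polls kubectl until the named claim reports the wanted
// phase or the timeout elapses.
func waitForPVCPhase(kubeContext, ns, name, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s never reached phase %q within %v", ns, name, want, timeout)
}

func main() {
	fmt.Println(waitForPVCPhase("addons-923322", "default", "hpvc", "Bound", 6*time.Minute))
}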

TestAddons/parallel/Headlamp (16.73s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-923322 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-hr94d" [4e19dcaa-56f8-4706-8aa4-dbd5a5124aa9] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-hr94d" [4e19dcaa-56f8-4706-8aa4-dbd5a5124aa9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-hr94d" [4e19dcaa-56f8-4706-8aa4-dbd5a5124aa9] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003349161s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable headlamp --alsologtostderr -v=1: (5.793028715s)
--- PASS: TestAddons/parallel/Headlamp (16.73s)

TestAddons/parallel/CloudSpanner (6.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-pkc8f" [d9212048-4b8e-40de-99be-bc0cbabc4337] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003716943s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-923322
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

TestAddons/parallel/LocalPath (53.41s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-923322 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-923322 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [057845ee-b96b-4094-bbbd-88a110c63ba9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [057845ee-b96b-4094-bbbd-88a110c63ba9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [057845ee-b96b-4094-bbbd-88a110c63ba9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003378241s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-923322 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 ssh "cat /opt/local-path-provisioner/pvc-ba5a34bf-fbd9-4670-bdf7-6d5c799eb71d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-923322 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-923322 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.264086707s)
--- PASS: TestAddons/parallel/LocalPath (53.41s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-cddcv" [b574c98b-2a15-4629-9c56-0509a4565cf5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003624954s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-923322
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (10.71s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4wvqd" [45bbc339-0422-4b5b-ad98-d315ab67d5c6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005324596s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-923322 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-923322 addons disable yakd --alsologtostderr -v=1: (5.699821119s)
--- PASS: TestAddons/parallel/Yakd (10.71s)

TestAddons/StoppedEnableDisable (11.21s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-923322
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-923322: (10.944372427s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-923322
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-923322
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-923322
--- PASS: TestAddons/StoppedEnableDisable (11.21s)

TestCertOptions (36.31s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-886610 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-886610 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (33.554665079s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-886610 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-886610 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-886610 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-886610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-886610
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-886610: (2.04829215s)
--- PASS: TestCertOptions (36.31s)

TestCertExpiration (245.73s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-179155 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-179155 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (38.256036191s)
E0918 20:39:42.571154    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-179155 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-179155 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (25.260173823s)
helpers_test.go:175: Cleaning up "cert-expiration-179155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-179155
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-179155: (2.208773492s)
--- PASS: TestCertExpiration (245.73s)

TestDockerFlags (40.91s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-948087 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-948087 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.500587479s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-948087 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-948087 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-948087" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-948087
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-948087: (2.774608336s)
--- PASS: TestDockerFlags (40.91s)

TestForceSystemdFlag (45.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-044949 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-044949 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.604856728s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-044949 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-044949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-044949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-044949: (2.31621328s)
--- PASS: TestForceSystemdFlag (45.40s)
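
The assertion behind the "docker info --format {{.CgroupDriver}}" step above: with --force-systemd, the Docker daemon inside the node is expected to report systemd rather than the cgroupfs default seen in the host's docker info earlier in this report. A sketch of that check, shelling out through minikube ssh the same way the test does (profile name copied from this run; the expected value and error handling are simplifications, not the test's exact code).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "force-systemd-flag-044949", "ssh",
		"docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected systemd cgroup driver, got %q\n", driver)
	} else {
		fmt.Println("cgroup driver is systemd")
	}
}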

TestForceSystemdEnv (45.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-984164 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-984164 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.072839419s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-984164 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-984164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-984164
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-984164: (2.241526543s)
--- PASS: TestForceSystemdEnv (45.85s)

TestErrorSpam/setup (31.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-070502 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-070502 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-070502 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-070502 --driver=docker  --container-runtime=docker: (31.751482352s)
--- PASS: TestErrorSpam/setup (31.75s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.21s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 status
--- PASS: TestErrorSpam/status (1.21s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.71s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

TestErrorSpam/stop (11.17s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 stop: (10.985077881s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-070502 --log_dir /tmp/nospam-070502 stop
--- PASS: TestErrorSpam/stop (11.17s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19667-2236/.minikube/files/etc/test/nested/copy/7565/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.11s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-325340 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-325340 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m19.106998121s)
--- PASS: TestFunctional/serial/StartWithProxy (79.11s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.79s)

=== RUN   TestFunctional/serial/SoftStart
I0918 19:55:08.665413    7565 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-325340 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-325340 --alsologtostderr -v=8: (36.782567651s)
functional_test.go:663: soft start took 36.786421576s for "functional-325340" cluster.
I0918 19:55:45.448358    7565 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (36.79s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-325340 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-325340 cache add registry.k8s.io/pause:3.1: (1.075643656s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-325340 cache add registry.k8s.io/pause:3.3: (1.19889616s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.20s)

TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-325340 /tmp/TestFunctionalserialCacheCmdcacheadd_local2597805901/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cache add minikube-local-cache-test:functional-325340
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cache delete minikube-local-cache-test:functional-325340
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-325340
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.577534ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
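Note: the cache_reload flow above (remove the image inside the node, confirm crictl no longer finds it, run `cache reload`, confirm it is back) can be replayed by hand. A minimal Go sketch, assuming a running functional-325340 profile and a `minikube` binary on PATH (this run uses the locally built out/minikube-linux-arm64 instead):

```go
// Sketch only: replays the cache_reload check by shelling out to minikube.
// Assumes a running "functional-325340" profile and `minikube` on PATH.
package main

import (
	"fmt"
	"os/exec"
)

// run executes `minikube <args>` and returns the error so callers can
// assert on the exit status, mirroring the (dbg) Run / Non-zero exit lines.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-325340"
	// Remove the image inside the node, as the test does over ssh.
	run("-p", p, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")
	// crictl inspecti should now fail (the "Non-zero exit" step above).
	if run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("expected the image to be gone before the reload")
	}
	// Reload from minikube's local cache, then verify the image is back.
	run("-p", p, "cache", "reload")
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}
```
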
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 kubectl -- --context functional-325340 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-325340 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (43.69s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-325340 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-325340 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.686116762s)
functional_test.go:761: restart took 43.686217993s for "functional-325340" cluster.
I0918 19:56:36.014562    7565 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (43.69s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-325340 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.18s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-325340 logs: (1.183554403s)
--- PASS: TestFunctional/serial/LogsCmd (1.18s)

TestFunctional/serial/LogsFileCmd (1.22s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 logs --file /tmp/TestFunctionalserialLogsFileCmd2059092814/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-325340 logs --file /tmp/TestFunctionalserialLogsFileCmd2059092814/001/logs.txt: (1.222368963s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

TestFunctional/serial/InvalidService (5.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-325340 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-325340
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-325340: exit status 115 (803.504841ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31633 |
	|-----------|-------------|-------------|---------------------------|
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-325340 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-325340 delete -f testdata/invalidsvc.yaml: (1.217385824s)
--- PASS: TestFunctional/serial/InvalidService (5.32s)

TestFunctional/parallel/ConfigCmd (0.57s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 config get cpus: exit status 14 (90.61867ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 config get cpus: exit status 14 (77.080627ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.57s)

TestFunctional/parallel/DashboardCmd (11.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-325340 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-325340 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 49562: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.29s)

TestFunctional/parallel/DryRun (0.49s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-325340 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-325340 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (219.815418ms)

-- stdout --
	* [functional-325340] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0918 19:57:18.507039   49224 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:57:18.507292   49224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:18.507324   49224 out.go:358] Setting ErrFile to fd 2...
	I0918 19:57:18.507343   49224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:18.507706   49224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 19:57:18.508200   49224 out.go:352] Setting JSON to false
	I0918 19:57:18.509523   49224 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2386,"bootTime":1726687053,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 19:57:18.509651   49224 start.go:139] virtualization:  
	I0918 19:57:18.513239   49224 out.go:177] * [functional-325340] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0918 19:57:18.516262   49224 notify.go:220] Checking for updates...
	I0918 19:57:18.516827   49224 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:57:18.520185   49224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:57:18.522944   49224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 19:57:18.525715   49224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 19:57:18.528477   49224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:57:18.531497   49224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:57:18.534846   49224 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:57:18.535475   49224 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:57:18.563063   49224 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:57:18.563205   49224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:57:18.634768   49224 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 19:57:18.62470811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:57:18.634874   49224 docker.go:318] overlay module found
	I0918 19:57:18.637730   49224 out.go:177] * Using the docker driver based on existing profile
	I0918 19:57:18.640369   49224 start.go:297] selected driver: docker
	I0918 19:57:18.640392   49224 start.go:901] validating driver "docker" against &{Name:functional-325340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-325340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:57:18.640561   49224 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:57:18.643919   49224 out.go:201] 
	W0918 19:57:18.646629   49224 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 19:57:18.649294   49224 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-325340 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.49s)
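Note: the DryRun check above hinges on the exit code: a dry-run start requesting 250MB must be rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal Go sketch of the same assertion, assuming a `minikube` binary on PATH (the harness uses out/minikube-linux-arm64):

```go
// Sketch only: asserts that an undersized dry-run start fails with exit
// status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), matching the log above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-325340",
		"--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=docker")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("got the expected RSRC_INSUFFICIENT_REQ_MEMORY exit (23)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}
```
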
TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-325340 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-325340 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (254.456919ms)

-- stdout --
	* [functional-325340] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0918 19:57:18.248292   49140 out.go:345] Setting OutFile to fd 1 ...
	I0918 19:57:18.248535   49140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:18.248571   49140 out.go:358] Setting ErrFile to fd 2...
	I0918 19:57:18.248597   49140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 19:57:18.249504   49140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 19:57:18.250157   49140 out.go:352] Setting JSON to false
	I0918 19:57:18.251424   49140 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":2386,"bootTime":1726687053,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0918 19:57:18.251557   49140 start.go:139] virtualization:  
	I0918 19:57:18.257005   49140 out.go:177] * [functional-325340] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0918 19:57:18.261125   49140 out.go:177]   - MINIKUBE_LOCATION=19667
	I0918 19:57:18.261177   49140 notify.go:220] Checking for updates...
	I0918 19:57:18.264622   49140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:57:18.267372   49140 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	I0918 19:57:18.269326   49140 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	I0918 19:57:18.272234   49140 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:57:18.274469   49140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:57:18.277828   49140 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 19:57:18.278405   49140 driver.go:394] Setting default libvirt URI to qemu:///system
	I0918 19:57:18.308951   49140 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0918 19:57:18.309090   49140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:57:18.412419   49140 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-18 19:57:18.401224132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 19:57:18.412644   49140 docker.go:318] overlay module found
	I0918 19:57:18.418425   49140 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0918 19:57:18.421182   49140 start.go:297] selected driver: docker
	I0918 19:57:18.421209   49140 start.go:901] validating driver "docker" against &{Name:functional-325340 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-325340 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0918 19:57:18.421332   49140 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:57:18.424450   49140 out.go:201] 
	W0918 19:57:18.427215   49140 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 19:57:18.429729   49140 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)
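Note: the InternationalLanguage variant re-runs the same undersized dry-run and expects the French wording shown above ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."). A hedged sketch, assuming minikube selects its display language from the LC_ALL/LANG environment variables, which is how this test appears to flip the locale:

```go
// Sketch only: re-runs the failing dry-run under a French locale.
// Assumption: minikube picks its display language from LC_ALL/LANG;
// the memory error should then read "Fermeture en raison de ..." as above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-325340",
		"--dry-run", "--memory", "250MB", "--driver=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // exits 23 here too; the output text is what matters
	fmt.Printf("%s", out)
}
```
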
TestFunctional/parallel/StatusCmd (1.31s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.31s)

TestFunctional/parallel/ServiceCmdConnect (11.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-325340 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-325340 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-fxv2n" [7db0b1dd-a209-4d7a-a795-c8fb9d99656d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0918 19:56:57.700234    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-fxv2n" [7db0b1dd-a209-4d7a-a795-c8fb9d99656d] Running
E0918 19:57:02.822124    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004051478s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31303
functional_test.go:1675: http://192.168.49.2:31303: success! body:

Hostname: hello-node-connect-65d86f57f4-fxv2n

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31303
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.74s)
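Note: the final connectivity step above is a plain HTTP GET against the NodePort URL the test discovered. A minimal Go sketch; the address below is this run's URL (http://192.168.49.2:31303), and the port is assigned per run, so take it from `minikube service hello-node-connect --url`:

```go
// Sketch only: fetches the echoserver's NodePort URL and prints the
// response, reproducing the "success! body:" check above by hand.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:31303") // this run's URL; varies per run
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s\n%s", resp.Status, body)
}
```
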
TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (27.35s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [57b80fb7-7951-4da2-8e14-4468a9e27209] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003440906s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-325340 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-325340 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-325340 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-325340 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [833474fb-ed0f-4797-8687-043c58d28c0a] Pending
E0918 19:56:52.570380    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.576647    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.587965    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.609290    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.650645    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.731975    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:52.893467    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [833474fb-ed0f-4797-8687-043c58d28c0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0918 19:56:53.214771    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:53.856091    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:56:55.137864    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [833474fb-ed0f-4797-8687-043c58d28c0a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003559845s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-325340 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-325340 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-325340 delete -f testdata/storage-provisioner/pod.yaml: (1.304434446s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-325340 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [db0310f4-a483-4d34-8ce1-aab88fe6d417] Pending
helpers_test.go:344: "sp-pod" [db0310f4-a483-4d34-8ce1-aab88fe6d417] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [db0310f4-a483-4d34-8ce1-aab88fe6d417] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004467289s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-325340 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.35s)
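Note: the PersistentVolumeClaim sequence above (claim, pod, write a marker file, recreate the pod, check the file survived) can be replayed with kubectl. A minimal Go sketch, assuming the functional-325340 context and the testdata manifests from the minikube repository; unlike the test, it does not wait for sp-pod to become Ready between steps:

```go
// Sketch only: replays the PVC persistence check via kubectl. Assumes the
// "functional-325340" kube context and the minikube repo's testdata files;
// in practice, wait for sp-pod to become Ready before each exec.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-325340"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl error:", err)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Write a marker file onto the PVC-backed mount.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod, then confirm the file survived the restart.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}
```
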
TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (2.69s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh -n functional-325340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cp functional-325340:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1673492738/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh -n functional-325340 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh -n functional-325340 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.69s)

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7565/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /etc/test/nested/copy/7565/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7565.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /etc/ssl/certs/7565.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7565.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /usr/share/ca-certificates/7565.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /etc/ssl/certs/75652.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /usr/share/ca-certificates/75652.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)

TestFunctional/parallel/NodeLabels (0.16s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-325340 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "sudo systemctl is-active crio": exit status 1 (273.166005ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
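
For context: systemctl is-active exits 0 when the unit is active and non-zero (conventionally 3 for inactive) otherwise, so the non-zero exit above is the expected result when docker, not crio, is the active runtime. A hedged local sketch of the same probe, assuming a systemd host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// systemctl prints the state on stdout even when it exits non-zero.
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("active (state=%q)\n", string(out))
	case errors.As(err, &exitErr):
		// Exit status 3 is the usual "inactive" code, as seen in the log.
		fmt.Printf("not active (state=%q, exit=%d)\n", string(out), exitErr.ExitCode())
	default:
		panic(err) // systemctl missing or not runnable at all
	}
}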

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.8s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-325340 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-325340 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-325340 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-325340 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 46367: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.80s)
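
The "unable to kill pid ... process already finished" line above is benign: the tunnel had already exited by the time cleanup tried to kill it. In Go that condition surfaces as os.ErrProcessDone, which a cleanup path can treat as success; a minimal sketch (the pid is taken from the log purely for illustration):

package main

import (
	"errors"
	"fmt"
	"os"
)

// killIfRunning kills pid, treating an already-exited process as success.
func killIfRunning(pid int) error {
	proc, err := os.FindProcess(pid) // on Unix this always returns a handle
	if err != nil {
		return err
	}
	if err := proc.Kill(); err != nil {
		if errors.Is(err, os.ErrProcessDone) {
			return nil // already gone: exactly the benign case in the log
		}
		return err
	}
	return nil
}

func main() {
	fmt.Println(killIfRunning(46367))
}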

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-325340 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-325340 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6ba420a5-4d3c-4a2a-a053-0116836562a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6ba420a5-4d3c-4a2a-a053-0116836562a4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003726942s
I0918 19:56:55.510714    7565 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.56s)
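
The "waiting 4m0s for pods matching run=nginx-svc" lines come from a poll loop over the Kubernetes API. A hedged sketch of that pattern using client-go (not minikube's actual helper; assumes a configured Clientset):

package probe

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForRunning polls until a pod matching selector reports phase Running,
// e.g. waitForRunning(ctx, cs, "default", "run=nginx-svc", 4*time.Minute).
func waitForRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // re-check until the deadline passes
	}
	return fmt.Errorf("timed out waiting for pods matching %q in %q", selector, ns)
}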

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-325340 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.64.205 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-325340 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-325340 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-325340 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-jkgwp" [0f82fa07-0a83-4e79-adbb-ecc02bb44e20] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-jkgwp" [0f82fa07-0a83-4e79-adbb-ecc02bb44e20] Running
E0918 19:57:13.063870    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004979306s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.62s)

TestFunctional/parallel/ServiceCmd/List (0.74s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.74s)

TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "476.18302ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "150.007559ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 service list -o json
functional_test.go:1494: Took "677.409446ms" to run "out/minikube-linux-arm64 -p functional-325340 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "431.155301ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "71.226039ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)
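
A consumer of `profile list -o json` would decode it into a struct; a hedged sketch follows. The field names ("valid"/"invalid" arrays of objects with a Name field) are assumptions inferred from typical minikube output, not taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of `minikube profile list -o json`.
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}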

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32292
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/MountCmd/any-port (8.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdany-port3104777006/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726689435649496133" to /tmp/TestFunctionalparallelMountCmdany-port3104777006/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726689435649496133" to /tmp/TestFunctionalparallelMountCmdany-port3104777006/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726689435649496133" to /tmp/TestFunctionalparallelMountCmdany-port3104777006/001/test-1726689435649496133
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (488.295149ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0918 19:57:16.138716    7565 retry.go:31] will retry after 404.251088ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 19:57 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 19:57 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 19:57 test-1726689435649496133
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh cat /mount-9p/test-1726689435649496133
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-325340 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [72dd3207-929c-451d-afd3-9f30611e9b26] Pending
helpers_test.go:344: "busybox-mount" [72dd3207-929c-451d-afd3-9f30611e9b26] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [72dd3207-929c-451d-afd3-9f30611e9b26] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [72dd3207-929c-451d-afd3-9f30611e9b26] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00455513s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-325340 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdany-port3104777006/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.45s)
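
The retry.go lines above show the harness's pattern for flaky probes: run the check, and on failure sleep a short interval and try again. A stripped-down sketch of that pattern (interval growth and attempt count are arbitrary here, not minikube's values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 400 * time.Millisecond // roughly the first interval seen in the log
	for attempt := 1; attempt <= 5; attempt++ {
		// Probe whether the 9p mount is visible yet, as the test does via ssh.
		err := exec.Command("findmnt", "-T", "/mount-9p").Run()
		if err == nil {
			fmt.Println("mount is visible")
			return
		}
		fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, backoff, err)
		time.Sleep(backoff)
		backoff *= 2 // back off between probes
	}
	fmt.Println("giving up after 5 attempts")
}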

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32292
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.47s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (1.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-325340 version -o=json --components: (1.229230963s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-325340 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-325340
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-325340
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-325340 image ls --format short --alsologtostderr:
I0918 19:57:40.681251   52516 out.go:345] Setting OutFile to fd 1 ...
I0918 19:57:40.681524   52516 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:40.681552   52516 out.go:358] Setting ErrFile to fd 2...
I0918 19:57:40.681580   52516 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:40.681853   52516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:57:40.682564   52516 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:40.682758   52516 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:40.683274   52516 cli_runner.go:164] Run: docker container inspect functional-325340 --format={{.State.Status}}
I0918 19:57:40.703728   52516 ssh_runner.go:195] Run: systemctl --version
I0918 19:57:40.703780   52516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-325340
I0918 19:57:40.724236   52516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/functional-325340/id_rsa Username:docker}
I0918 19:57:40.836397   52516 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-325340 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/kicbase/echo-server               | functional-325340 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-325340 | ad645171eb1ac | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-325340 image ls --format table --alsologtostderr:
I0918 19:57:41.675697   52794 out.go:345] Setting OutFile to fd 1 ...
I0918 19:57:41.675932   52794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.675953   52794 out.go:358] Setting ErrFile to fd 2...
I0918 19:57:41.675973   52794 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.676259   52794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:57:41.676955   52794 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.677110   52794 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.677642   52794 cli_runner.go:164] Run: docker container inspect functional-325340 --format={{.State.Status}}
I0918 19:57:41.697163   52794 ssh_runner.go:195] Run: systemctl --version
I0918 19:57:41.697224   52794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-325340
I0918 19:57:41.722245   52794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/functional-325340/id_rsa Username:docker}
I0918 19:57:41.827823   52794 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-325340 image ls --format json --alsologtostderr:
[{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-325340"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908d
b6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ad645171eb1ac04c503743fe84066a88ba981ce174cbe57c7262f40692b729da","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-325340"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf
1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"siz
e":"139000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-325340 image ls --format json --alsologtostderr:
I0918 19:57:41.417939   52722 out.go:345] Setting OutFile to fd 1 ...
I0918 19:57:41.419616   52722 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.419636   52722 out.go:358] Setting ErrFile to fd 2...
I0918 19:57:41.419654   52722 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.419951   52722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:57:41.421223   52722 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.421362   52722 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.421867   52722 cli_runner.go:164] Run: docker container inspect functional-325340 --format={{.State.Status}}
I0918 19:57:41.444264   52722 ssh_runner.go:195] Run: systemctl --version
I0918 19:57:41.444318   52722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-325340
I0918 19:57:41.465008   52722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/functional-325340/id_rsa Username:docker}
I0918 19:57:41.568810   52722 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-325340 image ls --format yaml --alsologtostderr:
- id: ad645171eb1ac04c503743fe84066a88ba981ce174cbe57c7262f40692b729da
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-325340
size: "30"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-325340
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-325340 image ls --format yaml --alsologtostderr:
I0918 19:57:41.148244   52652 out.go:345] Setting OutFile to fd 1 ...
I0918 19:57:41.148462   52652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.148489   52652 out.go:358] Setting ErrFile to fd 2...
I0918 19:57:41.148515   52652 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.148795   52652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:57:41.149669   52652 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.149867   52652 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.151128   52652 cli_runner.go:164] Run: docker container inspect functional-325340 --format={{.State.Status}}
I0918 19:57:41.177023   52652 ssh_runner.go:195] Run: systemctl --version
I0918 19:57:41.177080   52652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-325340
I0918 19:57:41.201542   52652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/functional-325340/id_rsa Username:docker}
I0918 19:57:41.304032   52652 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh pgrep buildkitd: exit status 1 (350.917829ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image build -t localhost/my-image:functional-325340 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-325340 image build -t localhost/my-image:functional-325340 testdata/build --alsologtostderr: (2.945191907s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-325340 image build -t localhost/my-image:functional-325340 testdata/build --alsologtostderr:
I0918 19:57:41.312251   52703 out.go:345] Setting OutFile to fd 1 ...
I0918 19:57:41.312495   52703 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.312503   52703 out.go:358] Setting ErrFile to fd 2...
I0918 19:57:41.312508   52703 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0918 19:57:41.312844   52703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
I0918 19:57:41.313719   52703 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.315071   52703 config.go:182] Loaded profile config "functional-325340": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0918 19:57:41.318128   52703 cli_runner.go:164] Run: docker container inspect functional-325340 --format={{.State.Status}}
I0918 19:57:41.354236   52703 ssh_runner.go:195] Run: systemctl --version
I0918 19:57:41.354308   52703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-325340
I0918 19:57:41.405021   52703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/functional-325340/id_rsa Username:docker}
I0918 19:57:41.507922   52703 build_images.go:161] Building image from path: /tmp/build.408129770.tar
I0918 19:57:41.507995   52703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 19:57:41.517407   52703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.408129770.tar
I0918 19:57:41.521645   52703 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.408129770.tar: stat -c "%s %y" /var/lib/minikube/build/build.408129770.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.408129770.tar': No such file or directory
I0918 19:57:41.521697   52703 ssh_runner.go:362] scp /tmp/build.408129770.tar --> /var/lib/minikube/build/build.408129770.tar (3072 bytes)
I0918 19:57:41.549133   52703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.408129770
I0918 19:57:41.558563   52703 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.408129770 -xf /var/lib/minikube/build/build.408129770.tar
I0918 19:57:41.569372   52703 docker.go:360] Building image: /var/lib/minikube/build/build.408129770
I0918 19:57:41.569428   52703 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-325340 /var/lib/minikube/build/build.408129770
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b66c6893b68895ad8a45ddce3adbc27a61d58fdff2324447d627e79f67c01dbd done
#8 naming to localhost/my-image:functional-325340 done
#8 DONE 0.0s
I0918 19:57:44.160285   52703 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-325340 /var/lib/minikube/build/build.408129770: (2.590833133s)
I0918 19:57:44.160369   52703 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.408129770
I0918 19:57:44.172382   52703 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.408129770.tar
I0918 19:57:44.183105   52703 build_images.go:217] Built localhost/my-image:functional-325340 from /tmp/build.408129770.tar
I0918 19:57:44.183135   52703 build_images.go:133] succeeded building to: functional-325340
I0918 19:57:44.183140   52703 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.52s)
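
The stderr above shows the shape of `image build`: the local context (testdata/build) is packed into a tar, copied into the node under /var/lib/minikube/build, extracted, and built with the node's docker daemon. A hedged sketch of just the packing step (path names are illustrative; this is not minikube's code):

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir packs the regular files under dir into a tar archive at dest,
// mirroring the "Building image from path: ...tar" step in the log.
func tarDir(dir, dest string) error {
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil || info.IsDir() {
			return walkErr
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel) // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	// Illustrative destination; the harness uses a temp file like /tmp/build.NNN.tar.
	if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
		panic(err)
	}
}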

TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-325340
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image load --daemon kicbase/echo-server:functional-325340 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image load --daemon kicbase/echo-server:functional-325340 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E0918 19:57:33.546213    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-325340
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image load --daemon kicbase/echo-server:functional-325340 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image save kicbase/echo-server:functional-325340 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image rm kicbase/echo-server:functional-325340 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-325340
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 image save --daemon kicbase/echo-server:functional-325340 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-325340
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.47s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/DockerEnv/bash (1.65s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-325340 docker-env) && out/minikube-linux-arm64 status -p functional-325340"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-325340 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.65s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3466099266/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3466099266/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3466099266/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T" /mount1: exit status 1 (877.532927ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0918 19:57:37.886607    7565 retry.go:31] will retry after 489.024105ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-325340 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-325340 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3466099266/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3466099266/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-325340 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3466099266/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.54s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-325340
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-325340
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-325340
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (128.45s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-317904 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0918 19:58:14.508205    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 19:59:36.430336    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-317904 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m7.576519043s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.45s)

TestMultiControlPlane/serial/DeployApp (48.59s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-317904 -- rollout status deployment/busybox: (6.56730176s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:02.622274    7565 retry.go:31] will retry after 1.033397218s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:03.830096    7565 retry.go:31] will retry after 1.228996273s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:05.247078    7565 retry.go:31] will retry after 2.053299661s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:07.462937    7565 retry.go:31] will retry after 4.511120377s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:12.137332    7565 retry.go:31] will retry after 5.960637352s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:18.284805    7565 retry.go:31] will retry after 6.989320779s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
I0918 20:00:25.426904    7565 retry.go:31] will retry after 15.799960556s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-c2fbk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-nj5jb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-pwkhp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-c2fbk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-nj5jb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-pwkhp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-c2fbk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-nj5jb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-pwkhp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (48.59s)
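
The long retry run above is the test polling until every busybox replica reports a pod IP. A sketch of that poll in Go, assuming the command line shown in the log; the fixed interval here replaces the randomized, growing backoff the test actually uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPodIPs repeats the jsonpath query from ha_test.go:140 until the
// deployment reports at least `want` pod IPs or the timeout expires.
func waitForPodIPs(profile string, want int, timeout time.Duration) ([]string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", profile,
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) >= want {
				return ips, nil
			}
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("fewer than %d pod IPs before timeout", want)
		}
		time.Sleep(2 * time.Second) // the real test uses a growing, jittered delay
	}
}

func main() {
	ips, err := waitForPodIPs("ha-317904", 3, 2*time.Minute)
	fmt.Println(ips, err)
}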

TestMultiControlPlane/serial/PingHostFromPods (1.79s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-c2fbk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-c2fbk -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-nj5jb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-nj5jb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-pwkhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-317904 -- exec busybox-7dff88458-pwkhp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.79s)
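
The nslookup | awk 'NR==5' | cut -d' ' -f3 pipeline above extracts the resolved host IP: fifth line of output, third space-separated field. The same extraction as a Go sketch; the sample output shape is an assumption about busybox nslookup, and cut's empty-field semantics are preserved:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `awk 'NR==5' | cut -d' ' -f3`: take the fifth line and
// return its third field, where every single space is a delimiter and
// empty fields count, exactly as cut treats them.
func hostIP(nslookupOut string) (string, error) {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return "", fmt.Errorf("want at least 5 lines, got %d", len(lines))
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return "", fmt.Errorf("line %q has no third field", lines[4])
	}
	return fields[2], nil
}

func main() {
	// Assumed busybox nslookup shape; the double space makes field 3 the IP.
	sample := "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10:53\n\nName:\thost.minikube.internal\nAddress:  192.168.49.1"
	fmt.Println(hostIP(sample))
}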

TestMultiControlPlane/serial/AddWorkerNode (28.65s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-317904 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-317904 -v=7 --alsologtostderr: (27.567890882s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr: (1.077737377s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.65s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-317904 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.134757591s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.14s)

TestMultiControlPlane/serial/CopyFile (20.4s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp testdata/cp-test.txt ha-317904:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2605485544/001/cp-test_ha-317904.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904:/home/docker/cp-test.txt ha-317904-m02:/home/docker/cp-test_ha-317904_ha-317904-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test_ha-317904_ha-317904-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904:/home/docker/cp-test.txt ha-317904-m03:/home/docker/cp-test_ha-317904_ha-317904-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test_ha-317904_ha-317904-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904:/home/docker/cp-test.txt ha-317904-m04:/home/docker/cp-test_ha-317904_ha-317904-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test_ha-317904_ha-317904-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp testdata/cp-test.txt ha-317904-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2605485544/001/cp-test_ha-317904-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m02:/home/docker/cp-test.txt ha-317904:/home/docker/cp-test_ha-317904-m02_ha-317904.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test_ha-317904-m02_ha-317904.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m02:/home/docker/cp-test.txt ha-317904-m03:/home/docker/cp-test_ha-317904-m02_ha-317904-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test_ha-317904-m02_ha-317904-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m02:/home/docker/cp-test.txt ha-317904-m04:/home/docker/cp-test_ha-317904-m02_ha-317904-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test_ha-317904-m02_ha-317904-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp testdata/cp-test.txt ha-317904-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2605485544/001/cp-test_ha-317904-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m03:/home/docker/cp-test.txt ha-317904:/home/docker/cp-test_ha-317904-m03_ha-317904.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test_ha-317904-m03_ha-317904.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m03:/home/docker/cp-test.txt ha-317904-m02:/home/docker/cp-test_ha-317904-m03_ha-317904-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test_ha-317904-m03_ha-317904-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m03:/home/docker/cp-test.txt ha-317904-m04:/home/docker/cp-test_ha-317904-m03_ha-317904-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test_ha-317904-m03_ha-317904-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp testdata/cp-test.txt ha-317904-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2605485544/001/cp-test_ha-317904-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m04:/home/docker/cp-test.txt ha-317904:/home/docker/cp-test_ha-317904-m04_ha-317904.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904 "sudo cat /home/docker/cp-test_ha-317904-m04_ha-317904.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m04:/home/docker/cp-test.txt ha-317904-m02:/home/docker/cp-test_ha-317904-m04_ha-317904-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m02 "sudo cat /home/docker/cp-test_ha-317904-m04_ha-317904-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 cp ha-317904-m04:/home/docker/cp-test.txt ha-317904-m03:/home/docker/cp-test_ha-317904-m04_ha-317904-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 ssh -n ha-317904-m03 "sudo cat /home/docker/cp-test_ha-317904-m04_ha-317904-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.40s)
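
Every transfer above follows the same cp-then-cat round-trip: copy the file to a node, then read it back over ssh to confirm it landed intact. Compressed into one hypothetical helper, with the binary path, flags, and profile taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// roundTrip copies src into node:dst with `minikube cp`, then reads it
// back with `minikube ssh -n <node> "sudo cat <dst>"` so the caller can
// compare it against the original contents.
func roundTrip(profile, node, src, dst string) (string, error) {
	if err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", src, node+":"+dst).Run(); err != nil {
		return "", fmt.Errorf("cp: %w", err)
	}
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+dst).Output()
	return string(out), err
}

func main() {
	got, err := roundTrip("ha-317904", "ha-317904-m02", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	fmt.Println(got, err)
}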

TestMultiControlPlane/serial/StopSecondaryNode (11.89s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 node stop m02 -v=7 --alsologtostderr
E0918 20:01:45.952703    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:45.959398    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:45.970803    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:45.992262    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:46.033642    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:46.115051    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:46.280924    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:46.602577    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:47.244516    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 node stop m02 -v=7 --alsologtostderr: (11.064511317s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr: exit status 7 (823.55175ms)
-- stdout --
	ha-317904
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-317904-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-317904-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-317904-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0918 20:01:47.354806   75616 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:01:47.355048   75616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:01:47.355075   75616 out.go:358] Setting ErrFile to fd 2...
	I0918 20:01:47.355102   75616 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:01:47.355606   75616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 20:01:47.355918   75616 out.go:352] Setting JSON to false
	I0918 20:01:47.356002   75616 mustload.go:65] Loading cluster: ha-317904
	I0918 20:01:47.356061   75616 notify.go:220] Checking for updates...
	I0918 20:01:47.356525   75616 config.go:182] Loaded profile config "ha-317904": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:01:47.356547   75616 status.go:174] checking status of ha-317904 ...
	I0918 20:01:47.357200   75616 cli_runner.go:164] Run: docker container inspect ha-317904 --format={{.State.Status}}
	I0918 20:01:47.378952   75616 status.go:364] ha-317904 host status = "Running" (err=<nil>)
	I0918 20:01:47.378976   75616 host.go:66] Checking if "ha-317904" exists ...
	I0918 20:01:47.379304   75616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-317904
	I0918 20:01:47.406934   75616 host.go:66] Checking if "ha-317904" exists ...
	I0918 20:01:47.407535   75616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:01:47.407650   75616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-317904
	I0918 20:01:47.431906   75616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/ha-317904/id_rsa Username:docker}
	I0918 20:01:47.537176   75616 ssh_runner.go:195] Run: systemctl --version
	I0918 20:01:47.550738   75616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:01:47.563795   75616 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:01:47.636294   75616 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-18 20:01:47.624516318 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:01:47.636904   75616 kubeconfig.go:125] found "ha-317904" server: "https://192.168.49.254:8443"
	I0918 20:01:47.636945   75616 api_server.go:166] Checking apiserver status ...
	I0918 20:01:47.636997   75616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:01:47.650200   75616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2299/cgroup
	I0918 20:01:47.660730   75616 api_server.go:182] apiserver freezer: "2:freezer:/docker/a5cd3ee1a703669945663d6220c59b0d4122582d7d09a17592e0c5c7440f7aac/kubepods/burstable/podc5ae5abf3884bfa0d6afc17f5d1dfbe4/665da1466841baa39f4ae723598a04b1f2b3b447582102e3a5a3d17ddd612511"
	I0918 20:01:47.660810   75616 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a5cd3ee1a703669945663d6220c59b0d4122582d7d09a17592e0c5c7440f7aac/kubepods/burstable/podc5ae5abf3884bfa0d6afc17f5d1dfbe4/665da1466841baa39f4ae723598a04b1f2b3b447582102e3a5a3d17ddd612511/freezer.state
	I0918 20:01:47.674781   75616 api_server.go:204] freezer state: "THAWED"
	I0918 20:01:47.674812   75616 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0918 20:01:47.682521   75616 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0918 20:01:47.682551   75616 status.go:456] ha-317904 apiserver status = Running (err=<nil>)
	I0918 20:01:47.682562   75616 status.go:176] ha-317904 status: &{Name:ha-317904 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:01:47.682581   75616 status.go:174] checking status of ha-317904-m02 ...
	I0918 20:01:47.682901   75616 cli_runner.go:164] Run: docker container inspect ha-317904-m02 --format={{.State.Status}}
	I0918 20:01:47.700931   75616 status.go:364] ha-317904-m02 host status = "Stopped" (err=<nil>)
	I0918 20:01:47.700952   75616 status.go:377] host is not running, skipping remaining checks
	I0918 20:01:47.700959   75616 status.go:176] ha-317904-m02 status: &{Name:ha-317904-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:01:47.700980   75616 status.go:174] checking status of ha-317904-m03 ...
	I0918 20:01:47.701589   75616 cli_runner.go:164] Run: docker container inspect ha-317904-m03 --format={{.State.Status}}
	I0918 20:01:47.718040   75616 status.go:364] ha-317904-m03 host status = "Running" (err=<nil>)
	I0918 20:01:47.718069   75616 host.go:66] Checking if "ha-317904-m03" exists ...
	I0918 20:01:47.718375   75616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-317904-m03
	I0918 20:01:47.736659   75616 host.go:66] Checking if "ha-317904-m03" exists ...
	I0918 20:01:47.737048   75616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:01:47.737102   75616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-317904-m03
	I0918 20:01:47.762301   75616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/ha-317904-m03/id_rsa Username:docker}
	I0918 20:01:47.864997   75616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:01:47.880046   75616 kubeconfig.go:125] found "ha-317904" server: "https://192.168.49.254:8443"
	I0918 20:01:47.880121   75616 api_server.go:166] Checking apiserver status ...
	I0918 20:01:47.880180   75616 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:01:47.894558   75616 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0918 20:01:47.906633   75616 api_server.go:182] apiserver freezer: "2:freezer:/docker/0bdb1fb39021ef4c83e764efbf9047df91fb6f33375d0da5d35752d983a392b2/kubepods/burstable/podd7a23f84e290c5b863aae45f228677d4/3e1ac43de43dcc07467d6963ded5a5f6f9d8407104c82beba93ce54ae588ef22"
	I0918 20:01:47.906770   75616 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0bdb1fb39021ef4c83e764efbf9047df91fb6f33375d0da5d35752d983a392b2/kubepods/burstable/podd7a23f84e290c5b863aae45f228677d4/3e1ac43de43dcc07467d6963ded5a5f6f9d8407104c82beba93ce54ae588ef22/freezer.state
	I0918 20:01:47.916720   75616 api_server.go:204] freezer state: "THAWED"
	I0918 20:01:47.916750   75616 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0918 20:01:47.926199   75616 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0918 20:01:47.926239   75616 status.go:456] ha-317904-m03 apiserver status = Running (err=<nil>)
	I0918 20:01:47.926267   75616 status.go:176] ha-317904-m03 status: &{Name:ha-317904-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:01:47.926291   75616 status.go:174] checking status of ha-317904-m04 ...
	I0918 20:01:47.926726   75616 cli_runner.go:164] Run: docker container inspect ha-317904-m04 --format={{.State.Status}}
	I0918 20:01:47.944774   75616 status.go:364] ha-317904-m04 host status = "Running" (err=<nil>)
	I0918 20:01:47.944799   75616 host.go:66] Checking if "ha-317904-m04" exists ...
	I0918 20:01:47.945154   75616 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-317904-m04
	I0918 20:01:47.962359   75616 host.go:66] Checking if "ha-317904-m04" exists ...
	I0918 20:01:47.962758   75616 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:01:47.962813   75616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-317904-m04
	I0918 20:01:47.982928   75616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/ha-317904-m04/id_rsa Username:docker}
	I0918 20:01:48.090305   75616 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:01:48.108494   75616 status.go:176] ha-317904-m04 status: &{Name:ha-317904-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.89s)
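
The status stderr above shows the per-node probe sequence: inspect the container state, check kubelet via systemctl, then hit the apiserver's /healthz through the HA virtual IP. A sketch of just the healthz step, assuming the endpoint from the log; certificate verification is skipped here, where the real client would use the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthz issues the same GET the log records against
// https://192.168.49.254:8443/healthz and reports the status plus body
// (the apiserver answers a bare "ok" on success, as seen above).
func healthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // stand-in for cluster CA handling
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz: %d %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(healthz("https://192.168.49.254:8443"))
}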

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0918 20:01:48.526016    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (68.27s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 node start m02 -v=7 --alsologtostderr
E0918 20:01:51.087434    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:52.569568    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:01:56.209502    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:06.451211    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:20.272526    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:02:26.933440    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 node start m02 -v=7 --alsologtostderr: (1m7.131351809s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr: (1.022172921s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (68.27s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090232289s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.69s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-317904 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-317904 -v=7 --alsologtostderr
E0918 20:03:07.895463    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-317904 -v=7 --alsologtostderr: (34.566297296s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-317904 --wait=true -v=7 --alsologtostderr
E0918 20:04:29.817013    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-317904 --wait=true -v=7 --alsologtostderr: (2m12.953730789s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-317904
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (167.69s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.55s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 node delete m03 -v=7 --alsologtostderr: (10.586302385s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.55s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (32.88s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 stop -v=7 --alsologtostderr: (32.747804823s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr: exit status 7 (128.226646ms)
-- stdout --
	ha-317904
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-317904-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-317904-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0918 20:06:31.156844  102027 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:06:31.156984  102027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:06:31.156997  102027 out.go:358] Setting ErrFile to fd 2...
	I0918 20:06:31.157003  102027 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:06:31.157326  102027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 20:06:31.157532  102027 out.go:352] Setting JSON to false
	I0918 20:06:31.157567  102027 mustload.go:65] Loading cluster: ha-317904
	I0918 20:06:31.157668  102027 notify.go:220] Checking for updates...
	I0918 20:06:31.158036  102027 config.go:182] Loaded profile config "ha-317904": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:06:31.158055  102027 status.go:174] checking status of ha-317904 ...
	I0918 20:06:31.158919  102027 cli_runner.go:164] Run: docker container inspect ha-317904 --format={{.State.Status}}
	I0918 20:06:31.180261  102027 status.go:364] ha-317904 host status = "Stopped" (err=<nil>)
	I0918 20:06:31.180286  102027 status.go:377] host is not running, skipping remaining checks
	I0918 20:06:31.180294  102027 status.go:176] ha-317904 status: &{Name:ha-317904 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:06:31.180329  102027 status.go:174] checking status of ha-317904-m02 ...
	I0918 20:06:31.180655  102027 cli_runner.go:164] Run: docker container inspect ha-317904-m02 --format={{.State.Status}}
	I0918 20:06:31.211529  102027 status.go:364] ha-317904-m02 host status = "Stopped" (err=<nil>)
	I0918 20:06:31.211565  102027 status.go:377] host is not running, skipping remaining checks
	I0918 20:06:31.211572  102027 status.go:176] ha-317904-m02 status: &{Name:ha-317904-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:06:31.211595  102027 status.go:174] checking status of ha-317904-m04 ...
	I0918 20:06:31.211950  102027 cli_runner.go:164] Run: docker container inspect ha-317904-m04 --format={{.State.Status}}
	I0918 20:06:31.229058  102027 status.go:364] ha-317904-m04 host status = "Stopped" (err=<nil>)
	I0918 20:06:31.229083  102027 status.go:377] host is not running, skipping remaining checks
	I0918 20:06:31.229090  102027 status.go:176] ha-317904-m04 status: &{Name:ha-317904-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.88s)

TestMultiControlPlane/serial/RestartCluster (160.76s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-317904 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0918 20:06:45.950667    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:06:52.568466    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:07:13.659388    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-317904 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m39.796441329s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (160.76s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

TestMultiControlPlane/serial/AddSecondaryNode (50.92s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-317904 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-317904 --control-plane -v=7 --alsologtostderr: (49.848274127s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-317904 status -v=7 --alsologtostderr: (1.066716876s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.92s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.086925406s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

TestImageBuild/serial/Setup (32.35s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-187593 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-187593 --driver=docker  --container-runtime=docker: (32.353187501s)
--- PASS: TestImageBuild/serial/Setup (32.35s)

TestImageBuild/serial/NormalBuild (2.02s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-187593
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-187593: (2.015394345s)
--- PASS: TestImageBuild/serial/NormalBuild (2.02s)

TestImageBuild/serial/BuildWithBuildArg (1.01s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-187593
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-187593: (1.008561728s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.01s)

TestImageBuild/serial/BuildWithDockerIgnore (1.04s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-187593
image_test.go:133: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-187593: (1.036934334s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.04s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-187593
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

TestJSONOutput/start/Command (74.91s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-510630 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0918 20:11:45.950792    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:11:52.568812    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-510630 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m14.91020677s)
--- PASS: TestJSONOutput/start/Command (74.91s)
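
With --output=json, start prints one JSON event per line instead of human-readable progress. A minimal consumer sketch under that assumption; the event fields are decoded generically because the schema isn't shown in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// streamEvents runs a minikube command and decodes each stdout line as a
// JSON object, printing whatever keys it carries.
func streamEvents(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return err
	}
	if err := cmd.Start(); err != nil {
		return err
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev map[string]any
		if json.Unmarshal(sc.Bytes(), &ev) == nil {
			fmt.Println(ev)
		}
	}
	return cmd.Wait()
}

func main() {
	_ = streamEvents("start", "-p", "json-output-510630", "--output=json", "--user=testUser")
}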

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-510630 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-510630 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-510630 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-510630 --output=json --user=testUser: (10.975048484s)
--- PASS: TestJSONOutput/stop/Command (10.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-667107 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-667107 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.013857ms)
-- stdout --
	{"specversion":"1.0","id":"2f5abce2-b516-4765-baaa-0c32d9b047ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-667107] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aa3e7ab3-3c48-403c-8036-158b585192e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"6d655014-6c2b-4a93-998c-8bbfcddbe2a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"72459f6a-e695-4c77-89a0-0c46bd2ad7f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig"}}
	{"specversion":"1.0","id":"543d214b-b8dd-4d44-9064-5ce45b8766bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube"}}
	{"specversion":"1.0","id":"92ce05e3-c338-4323-95ec-bfd4c22f0c3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2439e513-36a1-4c9a-9044-43ba1436efaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a978533-06f1-4d6f-af04-36cbc5112bf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-667107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-667107
--- PASS: TestErrorJSONOutput (0.23s)
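
The CloudEvents lines in the stdout above are what `--output=json` emits for every step, including the final io.k8s.sigs.minikube.error event. A minimal sketch, not part of the test suite, of consuming that stream in Go; the struct covers only the fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// cloudEvent mirrors only the fields visible in the log above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into stdin.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // not an event line
			}
			// io.k8s.sigs.minikube.error events also carry "exitcode" alongside "message".
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}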

TestKicCustomNetwork/create_custom_network (34.17s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-304913 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-304913 --network=: (32.403589189s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-304913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-304913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-304913: (1.739181005s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.17s)
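
The check at kic_custom_network_test.go:150 boils down to listing Docker network names. A minimal sketch of the same verification, assuming Docker is on PATH (the profile name is taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs: docker network ls --format {{.Name}}
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		for _, name := range strings.Fields(string(out)) {
			if name == "docker-network-304913" {
				fmt.Println("custom network present:", name)
			}
		}
	}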

TestKicCustomNetwork/use_default_bridge_network (34.77s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-847850 --network=bridge
E0918 20:13:15.635424    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-847850 --network=bridge: (32.748116804s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-847850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-847850
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-847850: (2.003301861s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (34.77s)

TestKicExistingNetwork (34.87s)

=== RUN   TestKicExistingNetwork
I0918 20:13:31.699806    7565 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0918 20:13:31.720200    7565 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0918 20:13:31.720279    7565 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0918 20:13:31.720297    7565 cli_runner.go:164] Run: docker network inspect existing-network
W0918 20:13:31.737510    7565 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0918 20:13:31.737542    7565 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0918 20:13:31.737561    7565 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0918 20:13:31.737680    7565 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0918 20:13:31.754658    7565 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3c97df0c2a48 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:1c:0c:98:e2} reservation:<nil>}
I0918 20:13:31.754972    7565 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001677080}
I0918 20:13:31.754995    7565 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0918 20:13:31.755048    7565 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0918 20:13:31.827078    7565 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-107986 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-107986 --network=existing-network: (32.749968494s)
helpers_test.go:175: Cleaning up "existing-network-107986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-107986
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-107986: (1.957218587s)
I0918 20:14:06.551050    7565 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.87s)
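
The network_create.go trace above skips the taken 192.168.49.0/24 and creates existing-network on the next free /24. A minimal sketch that replays that `docker network create` step by hand, assuming Docker on PATH; every flag value is copied from the trace:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Flags copied verbatim from the network_create.go log line above.
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network").CombinedOutput()
		fmt.Println(string(out), err)
	}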

TestKicCustomSubnet (32.89s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-117623 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-117623 --subnet=192.168.60.0/24: (30.713920668s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-117623 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-117623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-117623
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-117623: (2.156198648s)
--- PASS: TestKicCustomSubnet (32.89s)
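
The assertion behind kic_custom_network_test.go:161 is a one-field inspect. A minimal sketch, assuming Docker on PATH, comparing the network's first IPAM subnet against the value passed to --subnet:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-117623",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		fmt.Println("subnet:", got, "want 192.168.60.0/24:", got == "192.168.60.0/24")
	}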

TestKicStaticIP (35.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-966779 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-966779 --static-ip=192.168.200.200: (32.842753338s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-966779 ip
helpers_test.go:175: Cleaning up "static-ip-966779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-966779
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-966779: (2.16693398s)
--- PASS: TestKicStaticIP (35.17s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.87s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-159587 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-159587 --driver=docker  --container-runtime=docker: (30.1083314s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-162442 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-162442 --driver=docker  --container-runtime=docker: (38.111641698s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-159587
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-162442
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-162442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-162442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-162442: (2.09557173s)
helpers_test.go:175: Cleaning up "first-159587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-159587
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-159587: (2.122066834s)
--- PASS: TestMinikubeProfile (73.87s)
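
The test switches the active profile and then reads `profile list -ojson` after each switch. A minimal sketch of reading that output without assuming its schema (treating the top level as a JSON object is itself an assumption; only its keys are printed):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		// Decode generically rather than guessing field names.
		var doc map[string]json.RawMessage
		if err := json.Unmarshal(out, &doc); err != nil {
			panic(err)
		}
		for key := range doc {
			fmt.Println("top-level key:", key)
		}
	}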

TestMountStart/serial/StartWithMountFirst (15.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-300567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-300567 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (14.081043809s)
--- PASS: TestMountStart/serial/StartWithMountFirst (15.08s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-300567 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
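
Every VerifyMount* step in this group is the same probe: list the mount point over ssh and expect success. A minimal sketch of that probe against the first profile (binary path and profile name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-300567",
			"ssh", "--", "ls", "/minikube-host").Output()
		if err != nil {
			fmt.Println("mount not visible:", err)
			return
		}
		fmt.Println("host files visible in guest:", strings.Fields(string(out)))
	}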

TestMountStart/serial/StartWithMountSecond (8.46s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-302487 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0918 20:16:45.951390    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-302487 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.463636987s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.46s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-302487 ssh -- ls /minikube-host
E0918 20:16:52.568501    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-300567 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-300567 --alsologtostderr -v=5: (1.48090449s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-302487 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-302487
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-302487: (1.207349482s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (9.25s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-302487
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-302487: (8.247132335s)
--- PASS: TestMountStart/serial/RestartStopped (9.25s)

TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-302487 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)

TestMultiNode/serial/FreshStart2Nodes (83.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-676864 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0918 20:18:09.021625    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-676864 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.569450676s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.16s)

TestMultiNode/serial/DeployApp2Nodes (55.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-676864 -- rollout status deployment/busybox: (3.884684092s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:34.523086    7565 retry.go:31] will retry after 1.287501628s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:35.954708    7565 retry.go:31] will retry after 1.719765066s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:37.828302    7565 retry.go:31] will retry after 3.287561775s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:41.272291    7565 retry.go:31] will retry after 2.73997622s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:44.182828    7565 retry.go:31] will retry after 5.474586814s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:49.806121    7565 retry.go:31] will retry after 6.22143261s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:18:56.179297    7565 retry.go:31] will retry after 6.781928992s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0918 20:19:03.110933    7565 retry.go:31] will retry after 20.339149375s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-87c7q -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-rdscg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-87c7q -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-rdscg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-87c7q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-rdscg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (55.18s)
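
The retry trace above is a poll with growing backoff until both busybox replicas report a pod IP. A minimal sketch of the same loop; it assumes kubectl is on PATH with the multinode-676864 context, and the cap of 10 attempts is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podIPs runs the same jsonpath query as multinode_test.go:505.
	func podIPs() []string {
		out, _ := exec.Command("kubectl", "--context", "multinode-676864",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		return strings.Fields(string(out))
	}

	func main() {
		backoff := time.Second
		for attempt := 0; attempt < 10; attempt++ {
			if ips := podIPs(); len(ips) >= 2 {
				fmt.Println("both pods have IPs:", ips)
				return
			}
			time.Sleep(backoff)
			backoff *= 2 // grow the wait, like the retry intervals in the log
		}
		fmt.Println("timed out waiting for 2 pod IPs")
	}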

TestMultiNode/serial/PingHostFrom2Pods (1.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-87c7q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-87c7q -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-rdscg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-676864 -- exec busybox-7dff88458-rdscg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.09s)
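
The host-reachability check above extracts the resolved address with awk 'NR==5' (the answer line in busybox nslookup output) and then pings it once from inside the pod. A minimal sketch of those two steps (pod name from this run; `run` is a hypothetical helper, not the test's own code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// run is a hypothetical helper wrapping kubectl for brevity.
	func run(args ...string) string {
		out, _ := exec.Command("kubectl", args...).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		pod := "busybox-7dff88458-87c7q" // pod name from the log above
		ip := run("--context", "multinode-676864", "exec", pod, "--", "sh", "-c",
			"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
		fmt.Println("host.minikube.internal =", ip)
		fmt.Println(run("--context", "multinode-676864", "exec", pod, "--", "sh", "-c",
			"ping -c 1 "+ip))
	}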

TestMultiNode/serial/AddNode (17.89s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-676864 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-676864 -v 3 --alsologtostderr: (17.081047145s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.89s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-676864 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.63s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp testdata/cp-test.txt multinode-676864:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1233828173/001/cp-test_multinode-676864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864:/home/docker/cp-test.txt multinode-676864-m02:/home/docker/cp-test_multinode-676864_multinode-676864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m02 "sudo cat /home/docker/cp-test_multinode-676864_multinode-676864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864:/home/docker/cp-test.txt multinode-676864-m03:/home/docker/cp-test_multinode-676864_multinode-676864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m03 "sudo cat /home/docker/cp-test_multinode-676864_multinode-676864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp testdata/cp-test.txt multinode-676864-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1233828173/001/cp-test_multinode-676864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864-m02:/home/docker/cp-test.txt multinode-676864:/home/docker/cp-test_multinode-676864-m02_multinode-676864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864 "sudo cat /home/docker/cp-test_multinode-676864-m02_multinode-676864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864-m02:/home/docker/cp-test.txt multinode-676864-m03:/home/docker/cp-test_multinode-676864-m02_multinode-676864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m03 "sudo cat /home/docker/cp-test_multinode-676864-m02_multinode-676864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp testdata/cp-test.txt multinode-676864-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1233828173/001/cp-test_multinode-676864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864-m03:/home/docker/cp-test.txt multinode-676864:/home/docker/cp-test_multinode-676864-m03_multinode-676864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864 "sudo cat /home/docker/cp-test_multinode-676864-m03_multinode-676864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 cp multinode-676864-m03:/home/docker/cp-test.txt multinode-676864-m02:/home/docker/cp-test_multinode-676864-m03_multinode-676864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 ssh -n multinode-676864-m02 "sudo cat /home/docker/cp-test_multinode-676864-m03_multinode-676864-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.63s)
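
Each cell of the copy matrix above is push-then-readback. A minimal sketch of one cell: copy into the control-plane node with `minikube cp`, read the file back over ssh, and compare (binary path, profile, and file paths from this run):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		mk := "out/minikube-linux-arm64"
		if err := exec.Command(mk, "-p", "multinode-676864", "cp",
			"testdata/cp-test.txt", "multinode-676864:/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		back, err := exec.Command(mk, "-p", "multinode-676864", "ssh", "-n",
			"multinode-676864", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			panic(err)
		}
		fmt.Println("round-trip intact:", bytes.Equal(back, want))
	}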

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-676864 node stop m03: (1.23037117s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-676864 status: exit status 7 (543.357245ms)
-- stdout --
	multinode-676864
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-676864-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-676864-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr: exit status 7 (539.462005ms)
-- stdout --
	multinode-676864
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-676864-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-676864-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0918 20:19:57.450139  178513 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:19:57.450338  178513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:19:57.450350  178513 out.go:358] Setting ErrFile to fd 2...
	I0918 20:19:57.450356  178513 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:19:57.450653  178513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 20:19:57.450859  178513 out.go:352] Setting JSON to false
	I0918 20:19:57.450909  178513 mustload.go:65] Loading cluster: multinode-676864
	I0918 20:19:57.450984  178513 notify.go:220] Checking for updates...
	I0918 20:19:57.452268  178513 config.go:182] Loaded profile config "multinode-676864": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:19:57.452308  178513 status.go:174] checking status of multinode-676864 ...
	I0918 20:19:57.452989  178513 cli_runner.go:164] Run: docker container inspect multinode-676864 --format={{.State.Status}}
	I0918 20:19:57.472113  178513 status.go:364] multinode-676864 host status = "Running" (err=<nil>)
	I0918 20:19:57.472141  178513 host.go:66] Checking if "multinode-676864" exists ...
	I0918 20:19:57.472465  178513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-676864
	I0918 20:19:57.495488  178513 host.go:66] Checking if "multinode-676864" exists ...
	I0918 20:19:57.495787  178513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:19:57.495846  178513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-676864
	I0918 20:19:57.516750  178513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32911 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/multinode-676864/id_rsa Username:docker}
	I0918 20:19:57.617471  178513 ssh_runner.go:195] Run: systemctl --version
	I0918 20:19:57.623449  178513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:19:57.636646  178513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 20:19:57.696994  178513 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-18 20:19:57.687119958 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0918 20:19:57.697632  178513 kubeconfig.go:125] found "multinode-676864" server: "https://192.168.67.2:8443"
	I0918 20:19:57.697668  178513 api_server.go:166] Checking apiserver status ...
	I0918 20:19:57.697727  178513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 20:19:57.711983  178513 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2296/cgroup
	I0918 20:19:57.721523  178513 api_server.go:182] apiserver freezer: "2:freezer:/docker/9849f9f2ee12a2f6a50a3b8e18e8a477d428aa46235047b03dde5aecbe7aa7bc/kubepods/burstable/pod6f4ac3a65f697a6d70aadd29b9cc2486/3e7899dd4b9a4f304b3aba1e477e0e7f9c8ca0b257c0f71f2081934849e6bc66"
	I0918 20:19:57.721596  178513 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9849f9f2ee12a2f6a50a3b8e18e8a477d428aa46235047b03dde5aecbe7aa7bc/kubepods/burstable/pod6f4ac3a65f697a6d70aadd29b9cc2486/3e7899dd4b9a4f304b3aba1e477e0e7f9c8ca0b257c0f71f2081934849e6bc66/freezer.state
	I0918 20:19:57.737653  178513 api_server.go:204] freezer state: "THAWED"
	I0918 20:19:57.737682  178513 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0918 20:19:57.745317  178513 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0918 20:19:57.745351  178513 status.go:456] multinode-676864 apiserver status = Running (err=<nil>)
	I0918 20:19:57.745362  178513 status.go:176] multinode-676864 status: &{Name:multinode-676864 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:19:57.745382  178513 status.go:174] checking status of multinode-676864-m02 ...
	I0918 20:19:57.745713  178513 cli_runner.go:164] Run: docker container inspect multinode-676864-m02 --format={{.State.Status}}
	I0918 20:19:57.765037  178513 status.go:364] multinode-676864-m02 host status = "Running" (err=<nil>)
	I0918 20:19:57.765064  178513 host.go:66] Checking if "multinode-676864-m02" exists ...
	I0918 20:19:57.765370  178513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-676864-m02
	I0918 20:19:57.786705  178513 host.go:66] Checking if "multinode-676864-m02" exists ...
	I0918 20:19:57.787068  178513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 20:19:57.787124  178513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-676864-m02
	I0918 20:19:57.805122  178513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32916 SSHKeyPath:/home/jenkins/minikube-integration/19667-2236/.minikube/machines/multinode-676864-m02/id_rsa Username:docker}
	I0918 20:19:57.904533  178513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 20:19:57.917181  178513 status.go:176] multinode-676864-m02 status: &{Name:multinode-676864-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:19:57.917218  178513 status.go:174] checking status of multinode-676864-m03 ...
	I0918 20:19:57.917555  178513 cli_runner.go:164] Run: docker container inspect multinode-676864-m03 --format={{.State.Status}}
	I0918 20:19:57.936221  178513 status.go:364] multinode-676864-m03 host status = "Stopped" (err=<nil>)
	I0918 20:19:57.936246  178513 status.go:377] host is not running, skipping remaining checks
	I0918 20:19:57.936252  178513 status.go:176] multinode-676864-m03 status: &{Name:multinode-676864-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
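
The stderr trace shows how `status` decides the apiserver is healthy: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz and expect 200 with body "ok". A minimal sketch of just the HTTP step; the endpoint is this run's control-plane address, and skipping TLS verification is an assumption for a local probe, not necessarily what minikube does:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Assumption: skip verification for the local endpoint; minikube
			// itself may authenticate differently.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // the run above saw: 200 ok
	}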

TestMultiNode/serial/StartAfterStop (11.76s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-676864 node start m03 -v=7 --alsologtostderr: (10.919493421s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.76s)

TestMultiNode/serial/RestartKeepsNodes (98.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-676864
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-676864
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-676864: (22.564645567s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-676864 --wait=true -v=8 --alsologtostderr
E0918 20:21:45.950735    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-676864 --wait=true -v=8 --alsologtostderr: (1m15.631323428s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-676864
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.32s)

TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 node delete m03
E0918 20:21:52.568728    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-676864 node delete m03: (5.04621247s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)

TestMultiNode/serial/StopMultiNode (21.6s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-676864 stop: (21.390471465s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-676864 status: exit status 7 (112.638773ms)
-- stdout --
	multinode-676864
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-676864-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr: exit status 7 (100.663071ms)
-- stdout --
	multinode-676864
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-676864-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0918 20:22:15.310301  192100 out.go:345] Setting OutFile to fd 1 ...
	I0918 20:22:15.310506  192100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:22:15.310531  192100 out.go:358] Setting ErrFile to fd 2...
	I0918 20:22:15.310550  192100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0918 20:22:15.310834  192100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19667-2236/.minikube/bin
	I0918 20:22:15.311066  192100 out.go:352] Setting JSON to false
	I0918 20:22:15.311139  192100 mustload.go:65] Loading cluster: multinode-676864
	I0918 20:22:15.311204  192100 notify.go:220] Checking for updates...
	I0918 20:22:15.312581  192100 config.go:182] Loaded profile config "multinode-676864": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0918 20:22:15.312641  192100 status.go:174] checking status of multinode-676864 ...
	I0918 20:22:15.313422  192100 cli_runner.go:164] Run: docker container inspect multinode-676864 --format={{.State.Status}}
	I0918 20:22:15.330424  192100 status.go:364] multinode-676864 host status = "Stopped" (err=<nil>)
	I0918 20:22:15.330444  192100 status.go:377] host is not running, skipping remaining checks
	I0918 20:22:15.330450  192100 status.go:176] multinode-676864 status: &{Name:multinode-676864 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 20:22:15.330484  192100 status.go:174] checking status of multinode-676864-m02 ...
	I0918 20:22:15.330802  192100 cli_runner.go:164] Run: docker container inspect multinode-676864-m02 --format={{.State.Status}}
	I0918 20:22:15.359862  192100 status.go:364] multinode-676864-m02 host status = "Stopped" (err=<nil>)
	I0918 20:22:15.359880  192100 status.go:377] host is not running, skipping remaining checks
	I0918 20:22:15.359886  192100 status.go:176] multinode-676864-m02 status: &{Name:multinode-676864-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.60s)
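
Both status calls above fail with exit status 7, which is how stopped hosts are reported; scripts should branch on the exit code rather than the text. A minimal sketch of reading it (binary path and profile from this run):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-676864", "status").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("status exit code:", ee.ExitCode()) // 7 in the stopped state above
			return
		}
		fmt.Println("exit 0 (running) or command did not start:", err)
	}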

TestMultiNode/serial/RestartMultiNode (58.05s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-676864 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-676864 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.342370839s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-676864 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (58.05s)

TestMultiNode/serial/ValidateNameConflict (39.11s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-676864
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-676864-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-676864-m02 --driver=docker  --container-runtime=docker: exit status 14 (99.382925ms)
-- stdout --
	* [multinode-676864-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-676864-m02' is duplicated with machine name 'multinode-676864-m02' in profile 'multinode-676864'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-676864-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-676864-m03 --driver=docker  --container-runtime=docker: (36.467182479s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-676864
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-676864: exit status 80 (380.508663ms)

-- stdout --
	* Adding node m03 to cluster multinode-676864 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-676864-m03 already exists in multinode-676864-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-676864-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-676864-m03: (2.113582616s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.11s)
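
Both failures above (exit 14 for the duplicated profile name, exit 80 for the taken node name) reduce to a uniqueness check of a new name against existing profiles and their machine names. A minimal sketch of that kind of check, using a hypothetical profile shape; this is not minikube's actual validation code:

package main

import "fmt"

// profile maps a profile name to its machine (node) names; hypothetical shape.
type profile struct {
	Name     string
	Machines []string
}

// validateName rejects a new profile name that collides with any machine
// name already owned by an existing profile.
func validateName(newName string, existing []profile) error {
	for _, p := range existing {
		for _, m := range p.Machines {
			if m == newName {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					newName, m, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{
		Name:     "multinode-676864",
		Machines: []string{"multinode-676864", "multinode-676864-m02"},
	}}
	fmt.Println(validateName("multinode-676864-m02", existing)) // collides, as in the exit-14 case above
	fmt.Println(validateName("multinode-676864-m03", existing)) // unique, so the start proceeds
}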

TestPreload (104.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-978419 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-978419 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m5.012260052s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-978419 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-978419 image pull gcr.io/k8s-minikube/busybox: (2.144906381s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-978419
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-978419: (10.887699912s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-978419 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-978419 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (23.463245352s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-978419 image list
helpers_test.go:175: Cleaning up "test-preload-978419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-978419
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-978419: (2.388204141s)
--- PASS: TestPreload (104.36s)

TestScheduledStopUnix (106.27s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-133669 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-133669 --memory=2048 --driver=docker  --container-runtime=docker: (32.980301801s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-133669 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-133669 -n scheduled-stop-133669
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-133669 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0918 20:26:14.360358    7565 retry.go:31] will retry after 105.555µs: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.361532    7565 retry.go:31] will retry after 114.589µs: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.362686    7565 retry.go:31] will retry after 172.62µs: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.363825    7565 retry.go:31] will retry after 422.377µs: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.364976    7565 retry.go:31] will retry after 412.25µs: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.366118    7565 retry.go:31] will retry after 458.479µs: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.367340    7565 retry.go:31] will retry after 1.061231ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.368487    7565 retry.go:31] will retry after 1.94712ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.370695    7565 retry.go:31] will retry after 1.865656ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.372862    7565 retry.go:31] will retry after 5.126709ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.378986    7565 retry.go:31] will retry after 6.577751ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.386259    7565 retry.go:31] will retry after 11.351807ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.398562    7565 retry.go:31] will retry after 11.087561ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.410800    7565 retry.go:31] will retry after 22.215805ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
I0918 20:26:14.434083    7565 retry.go:31] will retry after 36.426524ms: open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/scheduled-stop-133669/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-133669 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-133669 -n scheduled-stop-133669
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-133669
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-133669 --schedule 15s
E0918 20:26:45.952236    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:26:52.569057    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-133669
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-133669: exit status 7 (70.307289ms)

-- stdout --
	scheduled-stop-133669
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-133669 -n scheduled-stop-133669
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-133669 -n scheduled-stop-133669: exit status 7 (69.76204ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-133669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-133669
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-133669: (1.677464438s)
--- PASS: TestScheduledStopUnix (106.27s)
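
The retry.go lines above show the wait loop for the scheduled-stop pid file: poll for the file and roughly double the delay after each miss. A minimal sketch of that backoff pattern, with illustrative constants and a hypothetical path rather than minikube's:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path, roughly doubling the delay between attempts,
// mirroring the growing "will retry after ..." intervals logged above.
func waitForFile(path string, attempts int) error {
	delay := 100 * time.Microsecond // illustrative starting point
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("gave up waiting for %s", path)
}

func main() {
	_ = waitForFile("/tmp/scheduled-stop-pid", 5) // hypothetical path
}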

TestSkaffold (121.67s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe415966586 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-595037 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-595037 --memory=2600 --driver=docker  --container-runtime=docker: (31.283024155s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe415966586 run --minikube-profile skaffold-595037 --kube-context skaffold-595037 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe415966586 run --minikube-profile skaffold-595037 --kube-context skaffold-595037 --status-check=true --port-forward=false --interactive=false: (1m15.25951755s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7dd77f98b-bcpp9" [deac4dc6-4e60-4791-96b4-727d5e91076f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004727948s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5f95dcf44d-nszlq" [ecdcdd2a-5266-493f-93dc-9e5e545e63ac] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004027858s
helpers_test.go:175: Cleaning up "skaffold-595037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-595037
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-595037: (2.910920866s)
--- PASS: TestSkaffold (121.67s)

TestInsufficientStorage (11.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-199878 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-199878 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.63787308s)

-- stdout --
	{"specversion":"1.0","id":"14e0d88c-c361-4af2-abba-513092c96751","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-199878] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"407f8313-8c25-4b82-aa13-56d2b637b142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19667"}}
	{"specversion":"1.0","id":"2c8d7fe2-dd4b-411d-98a0-ecbe963b4300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41b988f4-aa5f-4c97-9c43-70a10058c36e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig"}}
	{"specversion":"1.0","id":"980ab0ab-1501-47ee-948d-a9e39bf13524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube"}}
	{"specversion":"1.0","id":"7f717c0f-d614-479b-afdd-8f01d462c33b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"2f7c56a5-2a3d-462c-a99d-89654896722d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1e0b5d88-a05a-45f8-9700-da5425f71c58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"39a6783a-088a-4ead-bee9-98bd99c5ff98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"76cda634-e014-4b96-ac29-b61f909dc8a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a1104a0-fe2c-46e4-bec8-38f19cc6347e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4f832574-9fa6-440e-b080-953f0229edeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-199878\" primary control-plane node in \"insufficient-storage-199878\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a483bf9b-1cd6-4a08-baa6-2ad727e3b8bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ef67441-4568-4ee1-aaae-5e70b31eb8f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3232caa0-0cb8-42f7-88a7-e2a4a054caf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-199878 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-199878 --output=json --layout=cluster: exit status 7 (295.649554ms)

-- stdout --
	{"Name":"insufficient-storage-199878","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-199878","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0918 20:29:38.716434  226279 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-199878" does not appear in /home/jenkins/minikube-integration/19667-2236/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-199878 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-199878 --output=json --layout=cluster: exit status 7 (296.89096ms)

-- stdout --
	{"Name":"insufficient-storage-199878","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-199878","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0918 20:29:39.014167  226339 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-199878" does not appear in /home/jenkins/minikube-integration/19667-2236/kubeconfig
	E0918 20:29:39.024973  226339 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/insufficient-storage-199878/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-199878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-199878
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-199878: (1.736981816s)
--- PASS: TestInsufficientStorage (11.97s)
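
With --output=json, each progress line above is a CloudEvents-style envelope (specversion, id, source, type) whose data payload carries string fields such as name and exitcode. A small decoding sketch; the struct covers only the subset of fields visible in this log:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the envelope fields seen above; data values are all strings.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Trimmed copy of the error event from the log above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",
	  "data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",
	  "message":"Docker is out of disk space! (/var is at 100% of capacity)."}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["name"], e.Data["exitcode"])
}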

TestRunningBinaryUpgrade (102.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1883996797 start -p running-upgrade-939486 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1883996797 start -p running-upgrade-939486 --memory=2200 --vm-driver=docker  --container-runtime=docker: (39.796912033s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-939486 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0918 20:36:45.951474    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:36:52.568438    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:36:58.729340    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-939486 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (58.372147277s)
helpers_test.go:175: Cleaning up "running-upgrade-939486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-939486
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-939486: (2.208058469s)
--- PASS: TestRunningBinaryUpgrade (102.16s)

TestKubernetesUpgrade (393.27s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0918 20:31:45.951237    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:31:52.568763    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.232708589s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-109338
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-109338: (1.502399863s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-109338 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-109338 status --format={{.Host}}: exit status 7 (116.682322ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m48.54650824s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-109338 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (108.601513ms)

-- stdout --
	* [kubernetes-upgrade-109338] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-109338
	    minikube start -p kubernetes-upgrade-109338 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1093382 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-109338 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-109338 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (36.88888357s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-109338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-109338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-109338: (2.748742145s)
--- PASS: TestKubernetesUpgrade (393.27s)
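
The refused downgrade above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) rests on comparing the requested Kubernetes version with the cluster's existing one. A minimal sketch of such a comparison with a hand-rolled version parser; this is illustrative, not minikube's actual semver handling:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "v1.31.1" into [1 31 1]; error handling is omitted for brevity.
func parse(v string) []int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	out := make([]int, len(parts))
	for i, p := range parts {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// older reports whether version a is strictly older than version b.
func older(a, b string) bool {
	x, y := parse(a), parse(b)
	for i := 0; i < len(x) && i < len(y); i++ {
		if x[i] != y[i] {
			return x[i] < y[i]
		}
	}
	return len(x) < len(y)
}

func main() {
	current, requested := "v1.31.1", "v1.20.0"
	if older(requested, current) {
		fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n",
			current, requested)
	}
}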

TestMissingContainerUpgrade (179.09s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2922786810 start -p missing-upgrade-377797 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2922786810 start -p missing-upgrade-377797 --memory=2200 --driver=docker  --container-runtime=docker: (1m41.859118176s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-377797
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-377797: (10.468940548s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-377797
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-377797 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-377797 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.971756229s)
helpers_test.go:175: Cleaning up "missing-upgrade-377797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-377797
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-377797: (2.319566386s)
--- PASS: TestMissingContainerUpgrade (179.09s)

TestPause/serial/Start (57.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-676291 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0918 20:29:55.637244    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-676291 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (57.217923625s)
--- PASS: TestPause/serial/Start (57.22s)

TestPause/serial/SecondStartNoReconfiguration (32.96s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-676291 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-676291 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.938940774s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.96s)

TestPause/serial/Pause (0.78s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-676291 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

TestPause/serial/VerifyStatus (0.5s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-676291 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-676291 --output=json --layout=cluster: exit status 2 (497.031424ms)

-- stdout --
	{"Name":"pause-676291","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-676291","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.50s)
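
The --layout=cluster payload above reuses HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused, plus 507 InsufficientStorage and 500 Error in the TestInsufficientStorage output earlier. A sketch of decoding a node's component states; the structs are a subset inferred from this log, not minikube's full schema:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name  string `json:"Name"`
	Nodes []struct {
		Name       string               `json:"Name"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// Trimmed copy of the paused-cluster status printed above.
	raw := `{"Name":"pause-676291","Nodes":[{"Name":"pause-676291",
	  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
	  "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var s clusterStatus
	if err := json.Unmarshal([]byte(raw), &s); err != nil {
		panic(err)
	}
	for _, c := range s.Nodes[0].Components {
		fmt.Printf("%s: %d %s\n", c.Name, c.StatusCode, c.StatusName)
	}
}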

TestPause/serial/Unpause (0.65s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-676291 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (1.08s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-676291 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-676291 --alsologtostderr -v=5: (1.077245799s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

TestPause/serial/DeletePaused (2.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-676291 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-676291 --alsologtostderr -v=5: (2.981081466s)
--- PASS: TestPause/serial/DeletePaused (2.98s)

TestPause/serial/VerifyDeletedResources (0.17s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-676291
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-676291: exit status 1 (21.406194ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-676291: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)

TestStoppedBinaryUpgrade/Setup (1.22s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.22s)

TestStoppedBinaryUpgrade/Upgrade (92.52s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2744389752 start -p stopped-upgrade-485649 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0918 20:34:14.864799    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:14.871659    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:14.883034    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:14.904394    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:14.946515    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:15.027881    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:15.189968    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:15.511654    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:16.154762    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:17.436879    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:19.998842    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:25.120560    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:35.362678    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:49.023720    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:34:55.844013    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2744389752 start -p stopped-upgrade-485649 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.113922653s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2744389752 -p stopped-upgrade-485649 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2744389752 -p stopped-upgrade-485649 stop: (10.924066266s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-485649 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0918 20:35:36.805909    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-485649 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.483883957s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.52s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-485649
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-485649: (1.371683487s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.37s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-322872 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-322872 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (104.259651ms)

-- stdout --
	* [NoKubernetes-322872] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19667
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19667-2236/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19667-2236/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (43.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-322872 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-322872 --driver=docker  --container-runtime=docker: (43.250630652s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-322872 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.71s)

TestNoKubernetes/serial/StartWithStopK8s (18.17s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-322872 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-322872 --no-kubernetes --driver=docker  --container-runtime=docker: (15.770531195s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-322872 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-322872 status -o json: exit status 2 (406.613147ms)

-- stdout --
	{"Name":"NoKubernetes-322872","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-322872
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-322872: (1.98809738s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.17s)

TestNoKubernetes/serial/Start (12.33s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-322872 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-322872 --no-kubernetes --driver=docker  --container-runtime=docker: (12.329737553s)
--- PASS: TestNoKubernetes/serial/Start (12.33s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-322872 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-322872 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.008965ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
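
The verification above leans on exit codes: systemctl is-active --quiet exits non-zero when the unit is inactive (status 3, surfaced as the ssh error), which the test reads as Kubernetes being off. A minimal Go sketch of inspecting such an exit code, mirroring the test's command line but run locally rather than over minikube ssh:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same probe the test runs inside the node, executed locally here.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		// 3 = inactive, matching the "Process exited with status 3" above.
		fmt.Println("kubelet not active, exit status:", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}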

TestNoKubernetes/serial/ProfileList (0.89s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.89s)

TestNoKubernetes/serial/Stop (2.6s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-322872
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-322872: (2.597318934s)
--- PASS: TestNoKubernetes/serial/Stop (2.60s)

TestNoKubernetes/serial/StartNoArgs (9.49s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-322872 --driver=docker  --container-runtime=docker
E0918 20:39:14.861230    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-322872 --driver=docker  --container-runtime=docker: (9.490872892s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-322872 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-322872 "sudo systemctl is-active --quiet service kubelet": exit status 1 (383.516964ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

TestStartStop/group/old-k8s-version/serial/FirstStart (168.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-959748 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0918 20:41:45.951502    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:41:52.568488    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-959748 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m48.813027631s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (168.81s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-689561 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-689561 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m16.419450644s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.42s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.96s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-959748 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1cf1cc40-d1b3-4649-980f-30f471fb757c] Pending
helpers_test.go:344: "busybox" [1cf1cc40-d1b3-4649-980f-30f471fb757c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1cf1cc40-d1b3-4649-980f-30f471fb757c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003665106s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-959748 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.96s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-959748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-959748 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.440683324s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-959748 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/old-k8s-version/serial/Stop (11.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-959748 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-959748 --alsologtostderr -v=3: (11.379969308s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.38s)
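
Note: with the docker driver, stop shuts the node container down rather than deleting it. A sketch for confirming the container state after a stop:

	docker ps -a --filter name=old-k8s-version-959748 --format '{{.Names}}\t{{.Status}}'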

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-959748 -n old-k8s-version-959748
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-959748 -n old-k8s-version-959748: exit status 7 (133.286484ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-959748 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)
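
Note: exit status 7 is what minikube status returns for a fully stopped profile, which the test explicitly tolerates ("may be ok"); addon toggles are persisted to the profile config, so they can be applied while the cluster is down and take effect on the next start. A sketch that surfaces the exit code:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-959748 \
	  || echo "status exited with $?"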

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-689561 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7bd61b9e-6436-4f53-b825-488d0db25321] Pending
helpers_test.go:344: "busybox" [7bd61b9e-6436-4f53-b825-488d0db25321] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7bd61b9e-6436-4f53-b825-488d0db25321] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004399937s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-689561 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-689561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.455584369s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-689561 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-689561 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-689561 --alsologtostderr -v=3: (11.094587227s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561: exit status 7 (92.100678ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-689561 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-689561 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:46:35.639428    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:46:45.951185    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:46:52.568447    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:49:14.861608    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-689561 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.312555684s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.72s)
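
Note: SecondStart re-runs the original start flags against the stopped profile; the steps that follow assert that workloads and addons survived the restart. The interleaved cert_rotation errors appear to be stale client-certificate reloads for profiles deleted earlier in the run, not failures of this test. A sketch of checking that earlier state is still present:

	kubectl --context default-k8s-diff-port-689561 get pod busybox
	kubectl --context default-k8s-diff-port-689561 -n kubernetes-dashboard get pods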

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-786g2" [6028f28d-95a0-45af-befb-5ee0ca8539a3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004067946s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
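
Note: the dashboard pod comes from the addon enabled back in EnableAddonAfterStop; the wait matches on its standard label. The equivalent manual query:

	kubectl --context default-k8s-diff-port-689561 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard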

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-786g2" [6028f28d-95a0-45af-befb-5ee0ca8539a3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003806364s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-689561 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-689561 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
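
Note: VerifyKubernetesImages lists the images cached in the node and flags anything outside the expected Kubernetes set (here, the busybox image left over from DeployApp). A sketch for inspecting the same list, assuming the JSON output exposes a repoTags field per image:

	out/minikube-linux-arm64 -p default-k8s-diff-port-689561 image list --format=json \
	  | jq -r '.[].repoTags[]?'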

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-689561 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561: exit status 2 (371.534358ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561: exit status 2 (337.939842ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-689561 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-689561 -n default-k8s-diff-port-689561
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.99s)
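
Note: while paused, the apiserver reports Paused and the kubelet Stopped, and each status query exits 2, which the test tolerates; unpause restores both. A sketch of the same cycle, assuming the status template accepts multiple fields:

	out/minikube-linux-arm64 pause -p default-k8s-diff-port-689561
	out/minikube-linux-arm64 status -p default-k8s-diff-port-689561 \
	  --format='apiserver={{.APIServer}} kubelet={{.Kubelet}}'   # exits non-zero while paused
	out/minikube-linux-arm64 unpause -p default-k8s-diff-port-689561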

TestStartStop/group/embed-certs/serial/FirstStart (53.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-845058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-845058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (53.137451661s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.14s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-llwmw" [c92dbc3d-a6b5-4d7c-a614-28b20f9732a9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004082408s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-llwmw" [c92dbc3d-a6b5-4d7c-a614-28b20f9732a9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00426925s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-959748 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-959748 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-959748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-959748 -n old-k8s-version-959748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-959748 -n old-k8s-version-959748: exit status 2 (367.903725ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-959748 -n old-k8s-version-959748
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-959748 -n old-k8s-version-959748: exit status 2 (328.331027ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-959748 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-959748 -n old-k8s-version-959748
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-959748 -n old-k8s-version-959748
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.99s)

TestStartStop/group/no-preload/serial/FirstStart (90.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-747891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-747891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m30.136564152s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.14s)

TestStartStop/group/embed-certs/serial/DeployApp (10.62s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845058 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e40a56c-6aab-453d-8ec6-1bbb36bfce03] Pending
helpers_test.go:344: "busybox" [0e40a56c-6aab-453d-8ec6-1bbb36bfce03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e40a56c-6aab-453d-8ec6-1bbb36bfce03] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00371472s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-845058 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.62s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.53s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-845058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-845058 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.38442349s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-845058 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.53s)

TestStartStop/group/embed-certs/serial/Stop (11.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-845058 --alsologtostderr -v=3
E0918 20:50:37.933720    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-845058 --alsologtostderr -v=3: (11.158688478s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.16s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-845058 -n embed-certs-845058
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-845058 -n embed-certs-845058: exit status 7 (77.919365ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-845058 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (274.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-845058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:51:29.025775    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:45.951474    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:51:52.568402    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-845058 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m34.162054242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-845058 -n embed-certs-845058
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (274.55s)

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-747891 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9e7e692e-9de9-4ea9-8491-c11779948e27] Pending
helpers_test.go:344: "busybox" [9e7e692e-9de9-4ea9-8491-c11779948e27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9e7e692e-9de9-4ea9-8491-c11779948e27] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.005087816s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-747891 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-747891 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-747891 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/no-preload/serial/Stop (11.08s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-747891 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-747891 --alsologtostderr -v=3: (11.080324621s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.08s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-747891 -n no-preload-747891
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-747891 -n no-preload-747891: exit status 7 (77.14445ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-747891 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (267.48s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-747891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:53:26.795884    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:26.802375    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:26.813830    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:26.835291    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:26.876645    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:26.958143    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:27.119385    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:27.441025    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:28.083664    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:29.365127    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:31.927370    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:37.050186    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:53:47.292021    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:07.773950    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:14.862104    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.427983    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.434436    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.445932    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.467291    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.508701    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.590192    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:25.751659    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:26.073628    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:26.715718    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:27.997075    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:30.559432    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:35.680722    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:45.922251    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:54:48.735412    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:55:06.404217    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-747891 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.96351818s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-747891 -n no-preload-747891
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.48s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7zvgg" [6ad8b519-5c74-4551-af6c-55877aa904e0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003588327s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7zvgg" [6ad8b519-5c74-4551-af6c-55877aa904e0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017477532s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-845058 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-845058 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-845058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-845058 -n embed-certs-845058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-845058 -n embed-certs-845058: exit status 2 (357.213219ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-845058 -n embed-certs-845058
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-845058 -n embed-certs-845058: exit status 2 (351.519939ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-845058 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-845058 -n embed-certs-845058
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-845058 -n embed-certs-845058
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)

TestStartStop/group/newest-cni/serial/FirstStart (39.28s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-663914 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0918 20:55:47.365515    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:56:10.657260    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-663914 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.276915471s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.28s)
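
Note: this start waits only for the apiserver, system pods and the default service account, because --network-plugin=cni with a bare pod CIDR installs no actual CNI; the later newest-cni steps warn that pods cannot schedule. A sketch for inspecting that state (the node typically reports NotReady until a CNI is deployed):

	kubectl --context newest-cni-663914 get nodes
	kubectl --context newest-cni-663914 -n kube-system get pods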

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-663914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-663914 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097453089s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.10s)

TestStartStop/group/newest-cni/serial/Stop (11.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-663914 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-663914 --alsologtostderr -v=3: (11.151341971s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.15s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-663914 -n newest-cni-663914
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-663914 -n newest-cni-663914: exit status 7 (111.994636ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-663914 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (19.66s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-663914 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-663914 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (19.226521403s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-663914 -n newest-cni-663914
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.66s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4b9vz" [7c437d7f-9cda-4d6f-95ad-cebdc5c25962] Running
E0918 20:56:45.951459    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004295208s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4b9vz" [7c437d7f-9cda-4d6f-95ad-cebdc5c25962] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003490985s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-747891 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-663914 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (3.04s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-663914 --alsologtostderr -v=1
E0918 20:56:52.568533    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-663914 -n newest-cni-663914
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-663914 -n newest-cni-663914: exit status 2 (324.049294ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-663914 -n newest-cni-663914
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-663914 -n newest-cni-663914: exit status 2 (322.817145ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-663914 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-663914 -n newest-cni-663914
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-663914 -n newest-cni-663914
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.04s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-747891 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/no-preload/serial/Pause (4.42s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-747891 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-747891 --alsologtostderr -v=1: (1.095071712s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747891 -n no-preload-747891
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747891 -n no-preload-747891: exit status 2 (450.526697ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747891 -n no-preload-747891
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747891 -n no-preload-747891: exit status 2 (465.310147ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-747891 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747891 -n no-preload-747891
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747891 -n no-preload-747891
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.42s)
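
The pause/unpause round trip this subtest verifies can be replayed with the same commands the log records; exit status 2 from status is expected while components report Paused or Stopped:

    # Pause the control plane, confirm the reported states, then unpause.
    out/minikube-linux-arm64 pause -p no-preload-747891 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-747891 -n no-preload-747891   # "Paused", exit code 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-747891 -n no-preload-747891     # "Stopped", exit code 2
    out/minikube-linux-arm64 unpause -p no-preload-747891 --alsologtostderr -v=1
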
E0918 21:04:25.427949    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:04:36.992262    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:04:40.118941    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.497097    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.503590    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.515059    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.536531    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.578070    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.659523    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:01.821045    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:02.142788    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:02.785106    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:04.066611    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:06.628490    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.067092    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.073576    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.085059    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.106642    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.148207    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.229612    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.391044    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:09.712716    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:10.354824    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:11.636658    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:11.750191    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:05:14.198206    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/auto/Start (59.17s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (59.172909285s)
--- PASS: TestNetworkPlugins/group/auto/Start (59.17s)

TestNetworkPlugins/group/kindnet/Start (74.01s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0918 20:57:09.287416    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m14.00552119s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (74.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-274721 "pgrep -a kubelet"
I0918 20:57:57.769191    7565 config.go:182] Loaded profile config "auto-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.45s)

TestNetworkPlugins/group/auto/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-274721 replace --force -f testdata/netcat-deployment.yaml
I0918 20:57:58.174411    7565 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8zscd" [493b0213-aa1a-4648-887d-afda68739973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8zscd" [493b0213-aa1a-4648-887d-afda68739973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006570994s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.42s)
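
NetCatPod redeploys the netcat test workload and waits for its pod to become Ready. A rough hand-run equivalent, assuming the testdata/netcat-deployment.yaml manifest from the minikube source tree and substituting kubectl's built-in wait for the test's own polling loop:

    # Recreate the deployment, then block until its pod reports Ready.
    kubectl --context auto-274721 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-274721 wait --for=condition=Ready pod -l app=netcat --timeout=15m
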
TestNetworkPlugins/group/auto/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.30s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
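
For each plugin, the DNS, Localhost, and HairPin subtests reduce to three probes executed inside the netcat deployment; the commands below are lifted verbatim from the entries above:

    # Cluster DNS resolution, loopback reachability, and hairpin traffic
    # back through the pod's own service, in that order.
    kubectl --context auto-274721 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
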
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j77bh" [764723b4-e432-4b48-a870-9ad865660ed1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004389315s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-274721 "pgrep -a kubelet"
I0918 20:58:24.584494    7565 config.go:182] Loaded profile config "kindnet-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m8tjs" [2c21a284-c25f-49e9-8616-9eca5503d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 20:58:26.796688    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-m8tjs" [2c21a284-c25f-49e9-8616-9eca5503d8fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.01135657s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

TestNetworkPlugins/group/calico/Start (88.51s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m28.510139904s)
--- PASS: TestNetworkPlugins/group/calico/Start (88.51s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (65.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0918 20:59:14.862081    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:59:25.427979    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
E0918 20:59:53.129675    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/default-k8s-diff-port-689561/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.519239504s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.52s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-d2wpn" [dc33cd17-33cf-4963-8689-adf6462227f1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005641529s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-274721 "pgrep -a kubelet"
I0918 21:00:07.802714    7565 config.go:182] Loaded profile config "calico-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p8rz2" [59900fee-37ac-401a-ad77-fdbe7712c675] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p8rz2" [59900fee-37ac-401a-ad77-fdbe7712c675] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.006122823s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.29s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-274721 "pgrep -a kubelet"
I0918 21:00:08.765104    7565 config.go:182] Loaded profile config "custom-flannel-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h6sgs" [dfa72f29-7ac8-4cba-beb7-f547a58a32bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h6sgs" [dfa72f29-7ac8-4cba-beb7-f547a58a32bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004521403s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.31s)

TestNetworkPlugins/group/calico/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.37s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.28s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.26s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (55.28s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (55.276623878s)
--- PASS: TestNetworkPlugins/group/false/Start (55.28s)

TestNetworkPlugins/group/enable-default-cni/Start (52.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (52.428272443s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (52.43s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-274721 "pgrep -a kubelet"
I0918 21:01:42.485175    7565 config.go:182] Loaded profile config "enable-default-cni-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.33s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vkbrj" [857e80cb-bd49-42d6-940d-9a3e0cab682e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vkbrj" [857e80cb-bd49-42d6-940d-9a3e0cab682e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003459975s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.30s)

TestNetworkPlugins/group/false/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-274721 "pgrep -a kubelet"
I0918 21:01:45.301767    7565 config.go:182] Loaded profile config "false-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.49s)

TestNetworkPlugins/group/false/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x7l2b" [bd5a8ca9-6d42-4ce4-b556-bdd5d94c4385] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 21:01:45.950872    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/functional-325340/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-x7l2b" [bd5a8ca9-6d42-4ce4-b556-bdd5d94c4385] Running
E0918 21:01:52.568548    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.129580    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.136046    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.147550    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.168956    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.210428    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.291776    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.453422    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:53.775803    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:01:54.418232    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003787715s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (62.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m2.94732823s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.95s)
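
Every Start subtest in this group boots a fresh profile with an identical command line, varying only the CNI selection; the flannel invocation from this run is representative:

    # Start a dedicated profile with the flannel CNI on the docker driver.
    out/minikube-linux-arm64 start -p flannel-274721 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=flannel --driver=docker --container-runtime=docker
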
TestNetworkPlugins/group/bridge/Start (80.6s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0918 21:02:34.108462    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.138795    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.145634    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.156985    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.178395    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.219831    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.301222    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.462635    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:58.783957    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:02:59.425737    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:00.707023    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:03.268929    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:08.390198    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:15.070687    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/no-preload-747891/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:15.640987    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/addons-923322/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.182241    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.188595    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.199942    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.221299    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.262652    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.344030    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.505473    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.631922    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:18.827863    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:19.469531    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:20.751517    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m20.595854587s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.60s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rdtrt" [9cca4c11-dd5c-4877-b202-e2908ed9d657] Running
E0918 21:03:23.312849    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:26.796293    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/old-k8s-version-959748/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:28.434225    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00506652s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-274721 "pgrep -a kubelet"
I0918 21:03:29.013156    7565 config.go:182] Loaded profile config "flannel-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9zxxw" [4df380a2-3f24-4ff8-9ea8-c2919fb75b4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9zxxw" [4df380a2-3f24-4ff8-9ea8-c2919fb75b4b] Running
E0918 21:03:38.675625    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/kindnet-274721/client.crt: no such file or directory" logger="UnhandledError"
E0918 21:03:39.113611    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/auto-274721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004445157s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-274721 "pgrep -a kubelet"
I0918 21:03:44.820465    7565 config.go:182] Loaded profile config "bridge-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gh4cg" [fbc93a4d-dfbf-46b8-a393-7a4b9b7ebe00] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gh4cg" [fbc93a4d-dfbf-46b8-a393-7a4b9b7ebe00] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.004811586s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.32s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.26s)

TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

TestNetworkPlugins/group/kubenet/Start (74.69s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0918 21:04:14.862082    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/skaffold-595037/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-274721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m14.694701745s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (74.69s)
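
kubenet is the one variant in this group that is not selected through --cni; it is enabled via the legacy --network-plugin flag instead, as the invocation above shows:

    # kubenet goes through the legacy kubelet network plugin, not a CNI manifest.
    out/minikube-linux-arm64 start -p kubenet-274721 --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker --container-runtime=docker
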
TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-274721 "pgrep -a kubelet"
E0918 21:05:19.320186    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
I0918 21:05:19.360675    7565 config.go:182] Loaded profile config "kubenet-274721": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-274721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rztlw" [b5b9eba1-6c63-49f9-83d0-891299fbdb6a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 21:05:21.992450    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/calico-274721/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-rztlw" [b5b9eba1-6c63-49f9-83d0-891299fbdb6a] Running
E0918 21:05:29.562301    7565 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19667-2236/.minikube/profiles/custom-flannel-274721/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004246465s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-274721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-274721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.21s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

x
+
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

x
+
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)
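The two kubectl subtests are skipped on this job because the check only applies to darwin and windows. A sketch of the conventional Go GOOS guard behind that kind of skip; the test name and body are illustrative, not minikube's exact code:

    // A minimal sketch of the GOOS guard pattern behind skips like
    // "Test for darwin and windows"; names are illustrative.
    package example

    import (
    	"runtime"
    	"testing"
    )

    func TestKubectlDownload(t *testing.T) {
    	// On Linux, kubectl is expected to be provided separately, so the
    	// download check is only meaningful on darwin and windows.
    	if runtime.GOOS != "darwin" && runtime.GOOS != "windows" {
    		t.Skipf("skipping: test only applies to darwin and windows, got %s", runtime.GOOS)
    	}
    	// ... actual download/verification logic would go here ...
    }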

x
+
TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-404631 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-404631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-404631
--- SKIP: TestDownloadOnlyKic (0.54s)

x
+
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

x
+
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

x
+
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

x
+
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
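Several of the skips in this run (HelmTiller above, this KVM driver test, MySQL below) hinge on the same architecture guard, each referencing minikube issue 10144. A sketch of that pattern; the test name and body are illustrative:

    // A sketch of the GOARCH guard behind the arm64 skips in this run.
    package example

    import (
    	"runtime"
    	"testing"
    )

    func TestNeedsAMD64(t *testing.T) {
    	if runtime.GOARCH == "arm64" {
    		t.Skip("Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144")
    	}
    	// ... amd64-only test body would go here ...
    }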

x
+
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

x
+
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-841421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-841421
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
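Note that even this skipped group still runs the cleanup seen at helpers_test.go:175/178, deleting its profile so later tests start from a clean slate. A sketch of that deferred-cleanup pattern, assuming the binary path from this run; CleanupProfile is a hypothetical name, not minikube's exact helper:

    package example

    import (
    	"os/exec"
    	"testing"
    )

    // CleanupProfile deletes a minikube profile; hypothetical helper name.
    func CleanupProfile(t *testing.T, profile string) {
    	t.Helper()
    	t.Logf("Cleaning up %q profile ...", profile)
    	out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
    	if err != nil {
    		t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
    	}
    }

    func TestExample(t *testing.T) {
    	profile := "disable-driver-mounts-841421"
    	// Deferred functions still run when t.Skip ends the test via Goexit,
    	// so the profile is removed even on the skip path.
    	defer CleanupProfile(t, profile)
    	t.Skip("skipping - only runs on virtualbox")
    }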

x
+
TestNetworkPlugins/group/cilium (5.21s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-274721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-274721

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-274721

>>> host: /etc/nsswitch.conf:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/hosts:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/resolv.conf:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-274721

>>> host: crictl pods:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: crictl containers:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> k8s: describe netcat deployment:
error: context "cilium-274721" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-274721" does not exist

>>> k8s: netcat logs:
error: context "cilium-274721" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-274721" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-274721" does not exist

>>> k8s: coredns logs:
error: context "cilium-274721" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-274721" does not exist

>>> k8s: api server logs:
error: context "cilium-274721" does not exist

>>> host: /etc/cni:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: ip a s:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: ip r s:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: iptables-save:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: iptables table nat:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-274721

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-274721

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-274721" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-274721" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-274721

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-274721

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-274721" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-274721" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-274721" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-274721" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-274721" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: kubelet daemon config:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> k8s: kubelet logs:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-274721

>>> host: docker daemon status:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: docker daemon config:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: docker system info:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: cri-docker daemon status:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: cri-docker daemon config:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: cri-dockerd version:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: containerd daemon status:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: containerd daemon config:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: containerd config dump:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: crio daemon status:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: crio daemon config:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: /etc/crio:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

>>> host: crio config:
* Profile "cilium-274721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-274721"

----------------------- debugLogs end: cilium-274721 [took: 5.011359139s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-274721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-274721
--- SKIP: TestNetworkPlugins/group/cilium (5.21s)
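The debugLogs dump above is a fixed battery of labelled probes whose output is recorded verbatim, failures included (here every probe fails because the cilium-274721 profile was never created). A hedged sketch of how such a dump could be produced; only two probes are shown, and the command lines are assumed from the log format, not taken from minikube's source:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	profile := "cilium-274721"
    	probes := []struct {
    		label string
    		args  []string
    	}{
    		{"netcat: nslookup kubernetes.default",
    			[]string{"kubectl", "--context", profile, "exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"}},
    		{"host: /etc/resolv.conf",
    			[]string{"out/minikube-linux-arm64", "-p", profile, "ssh", "cat /etc/resolv.conf"}},
    	}
    	for _, p := range probes {
    		fmt.Printf(">>> %s:\n", p.label)
    		// CombinedOutput captures stderr too, so error text such as
    		// "context was not found" lands in the dump instead of aborting it.
    		out, _ := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
    		fmt.Printf("%s\n", out)
    	}
    }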
