Test Report: Docker_Linux_docker_arm64 18966

6c595620fab5adb75898ef5927d180f0ecb72463:2024-05-28:34666

Failed tests (3/343)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 30    | TestAddons/parallel/Ingress                            | 37.54        |
| 82    | TestFunctional/serial/ComponentHealth                  | 2.08         |
| 372   | TestStartStop/group/old-k8s-version/serial/SecondStart | 374.3        |
TestAddons/parallel/Ingress (37.54s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-885631 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-885631 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-885631 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [84827ca0-b167-4405-9c23-db2a6945fd74] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [84827ca0-b167-4405-9c23-db2a6945fd74] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.005441396s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-885631 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:299: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.07461821s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:301: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:305: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-885631 addons disable ingress --alsologtostderr -v=1: (7.904793361s)
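
The functional failure above is narrow: the ingress controller, the nginx pod, and the curl through the ingress all pass, and only the ingress-dns lookup against the node IP times out. For triage outside the harness, the sketch below reproduces the same query. It is a minimal, assumption-laden example (Go's net.Resolver with a custom dialer; the node IP 192.168.49.2 and hostname hello-john.test are taken from this run), not part of the test suite:

// dnsprobe.go: minimal repro of the failing ingress-dns lookup.
// Assumptions: node IP 192.168.49.2 and host hello-john.test as seen
// in this run; ingress-dns is expected to answer on port 53.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Route every query to the ingress-dns endpoint instead of the
		// system resolver, mirroring `nslookup hello-john.test 192.168.49.2`.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // the run above timed out here
		return
	}
	fmt.Println("resolved to:", addrs)
}

If this times out while the nginx and controller pods report Running (as they do above), the suspect is the DNS service exposure on the node rather than the ingress resources themselves.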
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-885631
helpers_test.go:235: (dbg) docker inspect addons-885631:

-- stdout --
	[
	    {
	        "Id": "71208480f1d7284e43f91997071287f4452a2870aeaf45df376ea7a1569d8426",
	        "Created": "2024-05-28T20:57:20.254909845Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1071406,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-28T20:57:20.550045855Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:acea75078737755d2f999491dfa245ea1d1040bffc73283b8c9ba9ff1fde89b5",
	        "ResolvConfPath": "/var/lib/docker/containers/71208480f1d7284e43f91997071287f4452a2870aeaf45df376ea7a1569d8426/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71208480f1d7284e43f91997071287f4452a2870aeaf45df376ea7a1569d8426/hostname",
	        "HostsPath": "/var/lib/docker/containers/71208480f1d7284e43f91997071287f4452a2870aeaf45df376ea7a1569d8426/hosts",
	        "LogPath": "/var/lib/docker/containers/71208480f1d7284e43f91997071287f4452a2870aeaf45df376ea7a1569d8426/71208480f1d7284e43f91997071287f4452a2870aeaf45df376ea7a1569d8426-json.log",
	        "Name": "/addons-885631",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-885631:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-885631",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d25a2509e2bce973561c5438d1d00b787d171fe17a4e869f7990d9e084d8c46-init/diff:/var/lib/docker/overlay2/8e655f7297a0818a5a7e390e8907c6f4d26023cd8c9930299bc7c4352e4766d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d25a2509e2bce973561c5438d1d00b787d171fe17a4e869f7990d9e084d8c46/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d25a2509e2bce973561c5438d1d00b787d171fe17a4e869f7990d9e084d8c46/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d25a2509e2bce973561c5438d1d00b787d171fe17a4e869f7990d9e084d8c46/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-885631",
	                "Source": "/var/lib/docker/volumes/addons-885631/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-885631",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-885631",
	                "name.minikube.sigs.k8s.io": "addons-885631",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d85801e6773ddf6ec61449dcfda388fe4f8874547bb0b8ab460bb86ced52aaf3",
	            "SandboxKey": "/var/run/docker/netns/d85801e6773d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33928"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33927"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33924"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33926"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33925"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-885631": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "3ea708a5af2f86175d6702e53deffa8ce92efd17fa19040492b68b85e1063b54",
	                    "EndpointID": "0cc0cf8e9075302b2d29a80a17bb7bf9f4e7812df06754846c4cc19d758d8fab",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-885631",
	                        "71208480f1d7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
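
One detail worth pulling out of the inspect dump: every published port is bound to 127.0.0.1 with a dynamically assigned host port (PortBindings requests HostPort "", and NetworkSettings.Ports shows the actual assignments, e.g. 22/tcp -> 33928). The provisioning log further down recovers the SSH port with a Go template; the sketch here wraps that same inspect call, assuming only a local docker CLI and the addons-885631 container from this run:

// sshport.go: recover the dynamically published SSH port of the
// minikube node container, using the same Go template the
// provisioner logs below ("22/tcp" -> HostPort).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "addons-885631").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Prints 33928 for the container captured in this report.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}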
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-885631 -n addons-885631
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-885631 logs -n 25: (1.12316627s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-017885   | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | -p download-only-017885              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| delete  | -p download-only-017885              | download-only-017885   | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| start   | -o=json --download-only              | download-only-478907   | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | -p download-only-478907              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| delete  | -p download-only-478907              | download-only-478907   | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| delete  | -p download-only-017885              | download-only-017885   | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| delete  | -p download-only-478907              | download-only-478907   | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| start   | --download-only -p                   | download-docker-725858 | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | download-docker-725858               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-725858            | download-docker-725858 | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| start   | --download-only -p                   | binary-mirror-848255   | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | binary-mirror-848255                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42511               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-848255              | binary-mirror-848255   | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| addons  | disable dashboard -p                 | addons-885631          | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | addons-885631                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-885631          | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | addons-885631                        |                        |         |         |                     |                     |
	| start   | -p addons-885631 --wait=true         | addons-885631          | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 21:00 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:00 UTC | 28 May 24 21:00 UTC |
	|         | -p addons-885631                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ip      | addons-885631 ip                     | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:00 UTC | 28 May 24 21:00 UTC |
	| addons  | addons-885631 addons disable         | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:00 UTC | 28 May 24 21:01 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-885631 addons                 | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC | 28 May 24 21:01 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC | 28 May 24 21:01 UTC |
	|         | addons-885631                        |                        |         |         |                     |                     |
	| ssh     | addons-885631 ssh curl -s            | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC | 28 May 24 21:01 UTC |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-885631 ip                     | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC | 28 May 24 21:01 UTC |
	| addons  | addons-885631 addons disable         | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC | 28 May 24 21:01 UTC |
	|         | ingress-dns --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-885631 addons disable         | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC | 28 May 24 21:01 UTC |
	|         | ingress --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-885631 addons                 | addons-885631          | jenkins | v1.33.1 | 28 May 24 21:01 UTC |                     |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:56:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:56:56.396420 1070943 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:56:56.396545 1070943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:56:56.396562 1070943 out.go:304] Setting ErrFile to fd 2...
	I0528 20:56:56.396568 1070943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:56:56.396889 1070943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 20:56:56.397393 1070943 out.go:298] Setting JSON to false
	I0528 20:56:56.402208 1070943 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16766,"bootTime":1716913051,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 20:56:56.402327 1070943 start.go:139] virtualization:  
	I0528 20:56:56.405175 1070943 out.go:177] * [addons-885631] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 20:56:56.408205 1070943 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 20:56:56.410396 1070943 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:56:56.408410 1070943 notify.go:220] Checking for updates...
	I0528 20:56:56.415400 1070943 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 20:56:56.417424 1070943 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 20:56:56.419461 1070943 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 20:56:56.421557 1070943 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 20:56:56.423920 1070943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:56:56.444755 1070943 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 20:56:56.444876 1070943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 20:56:56.509640 1070943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-28 20:56:56.499801576 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 20:56:56.509751 1070943 docker.go:295] overlay module found
	I0528 20:56:56.511882 1070943 out.go:177] * Using the docker driver based on user configuration
	I0528 20:56:56.513813 1070943 start.go:297] selected driver: docker
	I0528 20:56:56.513835 1070943 start.go:901] validating driver "docker" against <nil>
	I0528 20:56:56.513849 1070943 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 20:56:56.514579 1070943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 20:56:56.567518 1070943 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-28 20:56:56.558681968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 20:56:56.567694 1070943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:56:56.567924 1070943 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:56:56.570252 1070943 out.go:177] * Using Docker driver with root privileges
	I0528 20:56:56.572113 1070943 cni.go:84] Creating CNI manager for ""
	I0528 20:56:56.572147 1070943 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 20:56:56.572159 1070943 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 20:56:56.572254 1070943 start.go:340] cluster config:
	{Name:addons-885631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-885631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:56:56.574586 1070943 out.go:177] * Starting "addons-885631" primary control-plane node in "addons-885631" cluster
	I0528 20:56:56.576435 1070943 cache.go:121] Beginning downloading kic base image for docker with docker
	I0528 20:56:56.578296 1070943 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 20:56:56.579841 1070943 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 20:56:56.579899 1070943 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0528 20:56:56.579906 1070943 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 20:56:56.579915 1070943 cache.go:56] Caching tarball of preloaded images
	I0528 20:56:56.579993 1070943 preload.go:173] Found /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0528 20:56:56.580003 1070943 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 20:56:56.580356 1070943 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/config.json ...
	I0528 20:56:56.580388 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/config.json: {Name:mka69c339f37366dc896f9ab840eb98f3ffbf607 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:56:56.593082 1070943 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 20:56:56.593191 1070943 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 20:56:56.593211 1070943 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory, skipping pull
	I0528 20:56:56.593215 1070943 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in cache, skipping pull
	I0528 20:56:56.593222 1070943 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 as a tarball
	I0528 20:56:56.593228 1070943 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 from local cache
	I0528 20:57:13.475300 1070943 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 from cached tarball
	I0528 20:57:13.475338 1070943 cache.go:194] Successfully downloaded all kic artifacts
	I0528 20:57:13.475369 1070943 start.go:360] acquireMachinesLock for addons-885631: {Name:mkc50e5f2d64d77145b532701a76c49a378025c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 20:57:13.475498 1070943 start.go:364] duration metric: took 107.961µs to acquireMachinesLock for "addons-885631"
	I0528 20:57:13.475532 1070943 start.go:93] Provisioning new machine with config: &{Name:addons-885631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-885631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 20:57:13.475629 1070943 start.go:125] createHost starting for "" (driver="docker")
	I0528 20:57:13.480731 1070943 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0528 20:57:13.481009 1070943 start.go:159] libmachine.API.Create for "addons-885631" (driver="docker")
	I0528 20:57:13.481057 1070943 client.go:168] LocalClient.Create starting
	I0528 20:57:13.481200 1070943 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem
	I0528 20:57:13.852130 1070943 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem
	I0528 20:57:14.147208 1070943 cli_runner.go:164] Run: docker network inspect addons-885631 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0528 20:57:14.162789 1070943 cli_runner.go:211] docker network inspect addons-885631 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0528 20:57:14.162885 1070943 network_create.go:281] running [docker network inspect addons-885631] to gather additional debugging logs...
	I0528 20:57:14.162907 1070943 cli_runner.go:164] Run: docker network inspect addons-885631
	W0528 20:57:14.176313 1070943 cli_runner.go:211] docker network inspect addons-885631 returned with exit code 1
	I0528 20:57:14.176346 1070943 network_create.go:284] error running [docker network inspect addons-885631]: docker network inspect addons-885631: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-885631 not found
	I0528 20:57:14.176360 1070943 network_create.go:286] output of [docker network inspect addons-885631]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-885631 not found
	
	** /stderr **
	I0528 20:57:14.176464 1070943 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 20:57:14.190507 1070943 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c240d0}
	I0528 20:57:14.190547 1070943 network_create.go:124] attempt to create docker network addons-885631 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0528 20:57:14.190609 1070943 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-885631 addons-885631
	I0528 20:57:14.255217 1070943 network_create.go:108] docker network addons-885631 192.168.49.0/24 created
	I0528 20:57:14.255253 1070943 kic.go:121] calculated static IP "192.168.49.2" for the "addons-885631" container
	I0528 20:57:14.255330 1070943 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0528 20:57:14.269526 1070943 cli_runner.go:164] Run: docker volume create addons-885631 --label name.minikube.sigs.k8s.io=addons-885631 --label created_by.minikube.sigs.k8s.io=true
	I0528 20:57:14.285321 1070943 oci.go:103] Successfully created a docker volume addons-885631
	I0528 20:57:14.285421 1070943 cli_runner.go:164] Run: docker run --rm --name addons-885631-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885631 --entrypoint /usr/bin/test -v addons-885631:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib
	I0528 20:57:16.273756 1070943 cli_runner.go:217] Completed: docker run --rm --name addons-885631-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885631 --entrypoint /usr/bin/test -v addons-885631:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib: (1.98829747s)
	I0528 20:57:16.273787 1070943 oci.go:107] Successfully prepared a docker volume addons-885631
	I0528 20:57:16.273822 1070943 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 20:57:16.273845 1070943 kic.go:194] Starting extracting preloaded images to volume ...
	I0528 20:57:16.273933 1070943 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-885631:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -I lz4 -xf /preloaded.tar -C /extractDir
	I0528 20:57:20.189659 1070943 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-885631:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -I lz4 -xf /preloaded.tar -C /extractDir: (3.915659896s)
	I0528 20:57:20.189688 1070943 kic.go:203] duration metric: took 3.9158396s to extract preloaded images to volume ...
	W0528 20:57:20.189825 1070943 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0528 20:57:20.189942 1070943 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0528 20:57:20.240698 1070943 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-885631 --name addons-885631 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-885631 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-885631 --network addons-885631 --ip 192.168.49.2 --volume addons-885631:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862
	I0528 20:57:20.558505 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Running}}
	I0528 20:57:20.579069 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:20.605301 1070943 cli_runner.go:164] Run: docker exec addons-885631 stat /var/lib/dpkg/alternatives/iptables
	I0528 20:57:20.664547 1070943 oci.go:144] the created container "addons-885631" has a running status.
	I0528 20:57:20.664583 1070943 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa...
	I0528 20:57:20.962398 1070943 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0528 20:57:20.985962 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:21.009819 1070943 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0528 20:57:21.009839 1070943 kic_runner.go:114] Args: [docker exec --privileged addons-885631 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0528 20:57:21.068877 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:21.095339 1070943 machine.go:94] provisionDockerMachine start ...
	I0528 20:57:21.095441 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:21.123728 1070943 main.go:141] libmachine: Using SSH client type: native
	I0528 20:57:21.124002 1070943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0528 20:57:21.124011 1070943 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 20:57:21.297789 1070943 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885631
	
	I0528 20:57:21.297810 1070943 ubuntu.go:169] provisioning hostname "addons-885631"
	I0528 20:57:21.297879 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:21.315664 1070943 main.go:141] libmachine: Using SSH client type: native
	I0528 20:57:21.315913 1070943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0528 20:57:21.315927 1070943 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-885631 && echo "addons-885631" | sudo tee /etc/hostname
	I0528 20:57:21.472693 1070943 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-885631
	
	I0528 20:57:21.472857 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:21.492802 1070943 main.go:141] libmachine: Using SSH client type: native
	I0528 20:57:21.493057 1070943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0528 20:57:21.493074 1070943 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-885631' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-885631/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-885631' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 20:57:21.626107 1070943 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 20:57:21.626139 1070943 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1064873/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1064873/.minikube}
	I0528 20:57:21.626158 1070943 ubuntu.go:177] setting up certificates
	I0528 20:57:21.626168 1070943 provision.go:84] configureAuth start
	I0528 20:57:21.626243 1070943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885631
	I0528 20:57:21.642524 1070943 provision.go:143] copyHostCerts
	I0528 20:57:21.642607 1070943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem (1078 bytes)
	I0528 20:57:21.642734 1070943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem (1123 bytes)
	I0528 20:57:21.642806 1070943 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem (1679 bytes)
	I0528 20:57:21.642871 1070943 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem org=jenkins.addons-885631 san=[127.0.0.1 192.168.49.2 addons-885631 localhost minikube]
	I0528 20:57:22.031533 1070943 provision.go:177] copyRemoteCerts
	I0528 20:57:22.031640 1070943 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 20:57:22.031697 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:22.048088 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:22.142895 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 20:57:22.168016 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0528 20:57:22.191988 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 20:57:22.215834 1070943 provision.go:87] duration metric: took 589.651645ms to configureAuth
	I0528 20:57:22.215901 1070943 ubuntu.go:193] setting minikube options for container-runtime
	I0528 20:57:22.216099 1070943 config.go:182] Loaded profile config "addons-885631": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 20:57:22.216158 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:22.231549 1070943 main.go:141] libmachine: Using SSH client type: native
	I0528 20:57:22.231789 1070943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0528 20:57:22.231808 1070943 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 20:57:22.354552 1070943 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0528 20:57:22.354573 1070943 ubuntu.go:71] root file system type: overlay
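The `df --output=fstype / | tail -n 1` probe reports the filesystem backing /, which provisioning records before rendering the Docker unit; inside a kic container this is expected to be overlay. A hedged equivalent check (findmnt is assumed available, as it is on Ubuntu 22.04):

    # Two ways to read the root filesystem type; both print "overlay" here
    df --output=fstype / | tail -n 1
    findmnt -n -o FSTYPE /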
	I0528 20:57:22.354702 1070943 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 20:57:22.354776 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:22.371649 1070943 main.go:141] libmachine: Using SSH client type: native
	I0528 20:57:22.371924 1070943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0528 20:57:22.372000 1070943 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 20:57:22.505560 1070943 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
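The paired ExecStart= lines in the generated unit are the standard systemd idiom for replacing, rather than appending to, an inherited command: the empty assignment clears the command list, the second entry repopulates it. The same effect is more commonly achieved with a drop-in instead of overwriting the unit; a minimal sketch, where the override path and dockerd flags are illustrative and not what minikube writes:

    # Hypothetical drop-in demonstrating the ExecStart reset idiom
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker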
	
	I0528 20:57:22.505649 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:22.524302 1070943 main.go:141] libmachine: Using SSH client type: native
	I0528 20:57:22.524536 1070943 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33928 <nil> <nil>}
	I0528 20:57:22.524558 1070943 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 20:57:23.258274 1070943 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-16 08:38:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-28 20:57:22.499101140 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
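The one-liner that produced this output is a change-detection guard: `diff -u` exits non-zero when the files differ, so the mv/daemon-reload/enable/restart branch runs only when the rendered unit actually changed. The same commands from the log, reformatted for readability:

    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    sudo diff -u "$CUR" "$NEW" || {
        sudo mv "$NEW" "$CUR"
        sudo systemctl -f daemon-reload &&
        sudo systemctl -f enable docker &&
        sudo systemctl -f restart docker
    }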
	
	I0528 20:57:23.258318 1070943 machine.go:97] duration metric: took 2.162959319s to provisionDockerMachine
	I0528 20:57:23.258367 1070943 client.go:171] duration metric: took 9.777261816s to LocalClient.Create
	I0528 20:57:23.258393 1070943 start.go:167] duration metric: took 9.777384865s to libmachine.API.Create "addons-885631"
	I0528 20:57:23.258405 1070943 start.go:293] postStartSetup for "addons-885631" (driver="docker")
	I0528 20:57:23.258432 1070943 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 20:57:23.258520 1070943 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 20:57:23.258581 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:23.275202 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:23.367975 1070943 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 20:57:23.371326 1070943 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 20:57:23.371369 1070943 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 20:57:23.371380 1070943 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 20:57:23.371387 1070943 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 20:57:23.371402 1070943 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/addons for local assets ...
	I0528 20:57:23.371482 1070943 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/files for local assets ...
	I0528 20:57:23.371510 1070943 start.go:296] duration metric: took 113.098871ms for postStartSetup
	I0528 20:57:23.371827 1070943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885631
	I0528 20:57:23.387342 1070943 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/config.json ...
	I0528 20:57:23.387630 1070943 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 20:57:23.387683 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:23.403543 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:23.486568 1070943 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 20:57:23.490729 1070943 start.go:128] duration metric: took 10.015082935s to createHost
	I0528 20:57:23.490757 1070943 start.go:83] releasing machines lock for "addons-885631", held for 10.015245631s
	I0528 20:57:23.490831 1070943 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-885631
	I0528 20:57:23.513239 1070943 ssh_runner.go:195] Run: cat /version.json
	I0528 20:57:23.513292 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:23.513291 1070943 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 20:57:23.513333 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:23.540592 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:23.541991 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:23.629455 1070943 ssh_runner.go:195] Run: systemctl --version
	I0528 20:57:23.744324 1070943 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 20:57:23.748665 1070943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0528 20:57:23.774069 1070943 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0528 20:57:23.774201 1070943 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 20:57:23.803830 1070943 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
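Provisioning leaves exactly one patched loopback CNI config active and disables any bridge or podman configs by renaming them with a .mk_disabled suffix, so the runtime stops loading them but they stay recoverable. The rename step, extracted from the find invocation above:

    # Same disable-by-rename convention as the log's find/-exec pipeline
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;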
	I0528 20:57:23.803858 1070943 start.go:494] detecting cgroup driver to use...
	I0528 20:57:23.803924 1070943 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 20:57:23.804041 1070943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:57:23.820452 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 20:57:23.830228 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 20:57:23.840033 1070943 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 20:57:23.840128 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 20:57:23.849940 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 20:57:23.859771 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 20:57:23.869745 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 20:57:23.879692 1070943 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 20:57:23.888989 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 20:57:23.899530 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 20:57:23.909071 1070943 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 20:57:23.918908 1070943 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 20:57:23.927561 1070943 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 20:57:23.935893 1070943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:57:24.016896 1070943 ssh_runner.go:195] Run: sudo systemctl restart containerd
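The block of sed edits above configures containerd for the cgroupfs driver before restarting it; each edit anchors on a key, rewrites the whole line, and uses a capture group to preserve the original indentation, which makes the edits safe to re-run. The driver-selection edit in isolation:

    # Idempotent: matching lines are rewritten in place, so reruns are harmless
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd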
	I0528 20:57:24.129839 1070943 start.go:494] detecting cgroup driver to use...
	I0528 20:57:24.129890 1070943 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 20:57:24.129952 1070943 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 20:57:24.149506 1070943 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0528 20:57:24.149589 1070943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 20:57:24.165570 1070943 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 20:57:24.184816 1070943 ssh_runner.go:195] Run: which cri-dockerd
	I0528 20:57:24.188712 1070943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 20:57:24.198659 1070943 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 20:57:24.220451 1070943 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 20:57:24.335757 1070943 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 20:57:24.440145 1070943 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 20:57:24.440282 1070943 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 20:57:24.461687 1070943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:57:24.554262 1070943 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 20:57:24.825894 1070943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 20:57:24.841156 1070943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 20:57:24.856134 1070943 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 20:57:24.958220 1070943 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 20:57:25.044965 1070943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:57:25.134404 1070943 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 20:57:25.149063 1070943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 20:57:25.161855 1070943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:57:25.245347 1070943 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 20:57:25.310290 1070943 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 20:57:25.310374 1070943 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 20:57:25.314465 1070943 start.go:562] Will wait 60s for crictl version
	I0528 20:57:25.314529 1070943 ssh_runner.go:195] Run: which crictl
	I0528 20:57:25.318044 1070943 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 20:57:25.357291 1070943 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0528 20:57:25.357361 1070943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 20:57:25.377712 1070943 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
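Both runtime probes go through the endpoints configured earlier: crictl reads /var/run/cri-dockerd.sock from /etc/crictl.yaml, while `docker version` talks to dockerd directly; agreement between the two (Docker 26.1.3 here) is what the next line summarizes. To reproduce the checks by hand on the node:

    sudo crictl version
    docker version --format '{{.Server.Version}}'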
	I0528 20:57:25.407968 1070943 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0528 20:57:25.408096 1070943 cli_runner.go:164] Run: docker network inspect addons-885631 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 20:57:25.422365 1070943 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0528 20:57:25.425987 1070943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
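This is minikube's managed-hosts-entry idiom: filter out any stale line ending in the managed name, append a fresh tab-separated mapping, and copy the temp file back over /etc/hosts in one step. A generalized sketch of the same pattern, with the values from this run:

    IP=192.168.49.1 NAME=host.minikube.internal   # values from this run
    { grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"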
	I0528 20:57:25.436777 1070943 kubeadm.go:877] updating cluster {Name:addons-885631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-885631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 20:57:25.436900 1070943 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 20:57:25.436963 1070943 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 20:57:25.452963 1070943 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 20:57:25.452989 1070943 docker.go:615] Images already preloaded, skipping extraction
	I0528 20:57:25.453050 1070943 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 20:57:25.469090 1070943 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0528 20:57:25.469112 1070943 cache_images.go:84] Images are preloaded, skipping loading
	I0528 20:57:25.469130 1070943 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0528 20:57:25.469222 1070943 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-885631 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-885631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 20:57:25.469289 1070943 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 20:57:25.516076 1070943 cni.go:84] Creating CNI manager for ""
	I0528 20:57:25.516103 1070943 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 20:57:25.516121 1070943 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 20:57:25.516142 1070943 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-885631 NodeName:addons-885631 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 20:57:25.516299 1070943 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-885631"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
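The rendered file stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---`, and kubeadm consumes them in a single pass. Although not part of this minikube flow, a config like this can be exercised without mutating the node:

    # Validates the config and shows what init would do, without changing the host
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run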
	
	I0528 20:57:25.516365 1070943 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 20:57:25.524897 1070943 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 20:57:25.524970 1070943 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 20:57:25.533381 1070943 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0528 20:57:25.550875 1070943 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 20:57:25.568273 1070943 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0528 20:57:25.585316 1070943 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0528 20:57:25.588631 1070943 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 20:57:25.598907 1070943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:57:25.689945 1070943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:57:25.705066 1070943 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631 for IP: 192.168.49.2
	I0528 20:57:25.705087 1070943 certs.go:194] generating shared ca certs ...
	I0528 20:57:25.705103 1070943 certs.go:226] acquiring lock for ca certs: {Name:mk5cb73d5e2c9c3b65010257baa77ed890ffd0a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:25.705230 1070943 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key
	I0528 20:57:26.461537 1070943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt ...
	I0528 20:57:26.461568 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt: {Name:mk0582f1d96e8b2fad3273d5eb9b67165dc351e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:26.461749 1070943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key ...
	I0528 20:57:26.461768 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key: {Name:mkb13e6594b7ca3590ee91f14999472d1423532e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:26.461850 1070943 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key
	I0528 20:57:26.676107 1070943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.crt ...
	I0528 20:57:26.676136 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.crt: {Name:mk7c67bf3a010c80df656a0878b6ad6728f00fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:26.676328 1070943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key ...
	I0528 20:57:26.676341 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key: {Name:mk1ae9f9ac9665bc300fe2fe38593208a0592b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:26.676938 1070943 certs.go:256] generating profile certs ...
	I0528 20:57:26.677007 1070943 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.key
	I0528 20:57:26.677027 1070943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt with IP's: []
	I0528 20:57:27.344068 1070943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt ...
	I0528 20:57:27.344102 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: {Name:mk1bf8de0ce1fb124645d0650d3b5cb8f978ab44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:27.344308 1070943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.key ...
	I0528 20:57:27.344322 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.key: {Name:mk45a00123daac532070f9577c0c907420c5a403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:27.344418 1070943 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.key.d9e8954d
	I0528 20:57:27.344441 1070943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.crt.d9e8954d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0528 20:57:28.013474 1070943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.crt.d9e8954d ...
	I0528 20:57:28.013556 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.crt.d9e8954d: {Name:mk53f2851314f2ad7edfd0bffb6bf18493f75770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:28.013816 1070943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.key.d9e8954d ...
	I0528 20:57:28.013857 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.key.d9e8954d: {Name:mk2098e8a776f8d0a45052efdbec0a0206bd31e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:28.014004 1070943 certs.go:381] copying /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.crt.d9e8954d -> /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.crt
	I0528 20:57:28.014166 1070943 certs.go:385] copying /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.key.d9e8954d -> /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.key
	I0528 20:57:28.014256 1070943 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.key
	I0528 20:57:28.014312 1070943 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.crt with IP's: []
	I0528 20:57:28.585049 1070943 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.crt ...
	I0528 20:57:28.585082 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.crt: {Name:mk36fddb19b84d6aa24aee05f41507b533880388 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:28.585278 1070943 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.key ...
	I0528 20:57:28.585292 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.key: {Name:mkf29430293d7c9a625d78c849eee82e8a3b7cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:28.585484 1070943 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 20:57:28.585527 1070943 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem (1078 bytes)
	I0528 20:57:28.585559 1070943 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem (1123 bytes)
	I0528 20:57:28.585588 1070943 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem (1679 bytes)
	I0528 20:57:28.586228 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 20:57:28.609567 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 20:57:28.632553 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 20:57:28.656537 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 20:57:28.680464 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0528 20:57:28.703886 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 20:57:28.734265 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 20:57:28.760123 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 20:57:28.788811 1070943 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 20:57:28.812636 1070943 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
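After distribution, each cert/key pair under /var/lib/minikube/certs can be checked for consistency by comparing public-key digests; a mismatch here is a classic cause of apiserver TLS failures. A sketch using the apiserver pair copied above:

    sudo openssl x509 -noout -pubkey -in /var/lib/minikube/certs/apiserver.crt | openssl sha256
    sudo openssl pkey -pubout -in /var/lib/minikube/certs/apiserver.key | openssl sha256
    # The two digests must be identical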
	I0528 20:57:28.830228 1070943 ssh_runner.go:195] Run: openssl version
	I0528 20:57:28.835767 1070943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 20:57:28.845057 1070943 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:57:28.848477 1070943 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:57:28.848539 1070943 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 20:57:28.855258 1070943 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
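The b5213941.0 symlink exists because OpenSSL looks up CAs in /etc/ssl/certs by a subject-hash filename; b5213941 is minikubeCA's subject hash, produced by the `openssl x509 -hash` call two lines up. Making the derivation explicit:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # HASH is b5213941 here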
	I0528 20:57:28.864573 1070943 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 20:57:28.867639 1070943 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 20:57:28.867685 1070943 kubeadm.go:391] StartCluster: {Name:addons-885631 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-885631 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:57:28.867811 1070943 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 20:57:28.882632 1070943 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 20:57:28.891512 1070943 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 20:57:28.900308 1070943 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0528 20:57:28.900377 1070943 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 20:57:28.909114 1070943 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 20:57:28.909135 1070943 kubeadm.go:156] found existing configuration files:
	
	I0528 20:57:28.909204 1070943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 20:57:28.917855 1070943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 20:57:28.917932 1070943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 20:57:28.926577 1070943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 20:57:28.934871 1070943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 20:57:28.934938 1070943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 20:57:28.943739 1070943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 20:57:28.952448 1070943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 20:57:28.952538 1070943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 20:57:28.960806 1070943 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 20:57:28.969426 1070943 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 20:57:28.969496 1070943 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 20:57:28.977977 1070943 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0528 20:57:29.026583 1070943 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 20:57:29.026794 1070943 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 20:57:29.065849 1070943 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0528 20:57:29.065963 1070943 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1062-aws
	I0528 20:57:29.066056 1070943 kubeadm.go:309] OS: Linux
	I0528 20:57:29.066124 1070943 kubeadm.go:309] CGROUPS_CPU: enabled
	I0528 20:57:29.066189 1070943 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0528 20:57:29.066239 1070943 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0528 20:57:29.066290 1070943 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0528 20:57:29.066339 1070943 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0528 20:57:29.066393 1070943 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0528 20:57:29.066441 1070943 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0528 20:57:29.066491 1070943 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0528 20:57:29.066539 1070943 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0528 20:57:29.130188 1070943 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 20:57:29.130359 1070943 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 20:57:29.130493 1070943 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 20:57:29.372327 1070943 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 20:57:29.376977 1070943 out.go:204]   - Generating certificates and keys ...
	I0528 20:57:29.377128 1070943 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 20:57:29.377225 1070943 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 20:57:29.615801 1070943 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 20:57:30.019099 1070943 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 20:57:30.228163 1070943 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 20:57:30.420534 1070943 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 20:57:30.986354 1070943 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 20:57:30.986493 1070943 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-885631 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0528 20:57:31.568626 1070943 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 20:57:31.568922 1070943 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-885631 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0528 20:57:31.938121 1070943 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 20:57:32.409709 1070943 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 20:57:32.869034 1070943 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 20:57:32.869338 1070943 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 20:57:33.355997 1070943 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 20:57:33.606367 1070943 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 20:57:33.806594 1070943 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 20:57:34.405102 1070943 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 20:57:34.803720 1070943 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 20:57:34.804793 1070943 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 20:57:34.807924 1070943 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 20:57:34.810703 1070943 out.go:204]   - Booting up control plane ...
	I0528 20:57:34.810824 1070943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 20:57:34.810934 1070943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 20:57:34.813044 1070943 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 20:57:34.826434 1070943 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 20:57:34.827581 1070943 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 20:57:34.827643 1070943 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 20:57:34.937823 1070943 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 20:57:34.937909 1070943 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 20:57:36.939427 1070943 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 2.001899577s
	I0528 20:57:36.939517 1070943 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 20:57:43.441720 1070943 kubeadm.go:309] [api-check] The API server is healthy after 6.502325833s
	I0528 20:57:43.461634 1070943 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 20:57:43.472976 1070943 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 20:57:43.499511 1070943 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 20:57:43.499726 1070943 kubeadm.go:309] [mark-control-plane] Marking the node addons-885631 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 20:57:43.510882 1070943 kubeadm.go:309] [bootstrap-token] Using token: s725ac.73sgxk3smqem6ty3
	I0528 20:57:43.513111 1070943 out.go:204]   - Configuring RBAC rules ...
	I0528 20:57:43.513234 1070943 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 20:57:43.517803 1070943 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 20:57:43.525552 1070943 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 20:57:43.530431 1070943 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 20:57:43.534113 1070943 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 20:57:43.537869 1070943 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 20:57:43.849117 1070943 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 20:57:44.298097 1070943 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 20:57:44.849746 1070943 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 20:57:44.850921 1070943 kubeadm.go:309] 
	I0528 20:57:44.850991 1070943 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 20:57:44.850997 1070943 kubeadm.go:309] 
	I0528 20:57:44.851072 1070943 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 20:57:44.851076 1070943 kubeadm.go:309] 
	I0528 20:57:44.851101 1070943 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 20:57:44.851158 1070943 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 20:57:44.851223 1070943 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 20:57:44.851228 1070943 kubeadm.go:309] 
	I0528 20:57:44.851280 1070943 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 20:57:44.851284 1070943 kubeadm.go:309] 
	I0528 20:57:44.851330 1070943 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 20:57:44.851334 1070943 kubeadm.go:309] 
	I0528 20:57:44.851384 1070943 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 20:57:44.851457 1070943 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 20:57:44.851522 1070943 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 20:57:44.851527 1070943 kubeadm.go:309] 
	I0528 20:57:44.851607 1070943 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 20:57:44.851681 1070943 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 20:57:44.851686 1070943 kubeadm.go:309] 
	I0528 20:57:44.851766 1070943 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token s725ac.73sgxk3smqem6ty3 \
	I0528 20:57:44.851865 1070943 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c97b1399726f8cdd7302e82f74f094a89f23c332ff3aba8bc1ca69a66ac31365 \
	I0528 20:57:44.851885 1070943 kubeadm.go:309] 	--control-plane 
	I0528 20:57:44.851889 1070943 kubeadm.go:309] 
	I0528 20:57:44.851970 1070943 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 20:57:44.851975 1070943 kubeadm.go:309] 
	I0528 20:57:44.852054 1070943 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token s725ac.73sgxk3smqem6ty3 \
	I0528 20:57:44.852152 1070943 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c97b1399726f8cdd7302e82f74f094a89f23c332ff3aba8bc1ca69a66ac31365 
	I0528 20:57:44.854422 1070943 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-aws\n", err: exit status 1
	I0528 20:57:44.854535 1070943 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
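Both [WARNING] lines above are advisory on this runner. Where they did matter, the remediation is the one kubeadm itself points at; a sketch (note that "modprobe configs" only succeeds on kernels that ship that module, which this 5.15.0-1062-aws kernel evidently does not):

	sudo modprobe configs                  # exposes /proc/config.gz for the SystemVerification check
	sudo systemctl enable kubelet.service  # clears the Service-Kubelet warning on future boots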
	I0528 20:57:44.854560 1070943 cni.go:84] Creating CNI manager for ""
	I0528 20:57:44.854578 1070943 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 20:57:44.858063 1070943 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 20:57:44.860054 1070943 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 20:57:44.868638 1070943 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
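The 496-byte conflist itself is not echoed into the log. As a rough illustration only (this is a generic bridge CNI config, not minikube's exact file, and the subnet shown is a placeholder), a conflist of this shape would look like:

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF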
	I0528 20:57:44.887188 1070943 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 20:57:44.887327 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-885631 minikube.k8s.io/updated_at=2024_05_28T20_57_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=addons-885631 minikube.k8s.io/primary=true
	I0528 20:57:44.887329 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
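Those two kubectl invocations label the new node and bind the kube-system default ServiceAccount to cluster-admin. Both are easy to verify by hand once the API is reachable, for example:

	kubectl get node addons-885631 --show-labels | grep minikube.k8s.io/primary
	kubectl get clusterrolebinding minikube-rbac -o wide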
	I0528 20:57:45.035401 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:45.035465 1070943 ops.go:34] apiserver oom_adj: -16
	I0528 20:57:45.535505 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:46.035661 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:46.536447 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:47.036493 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:47.535832 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:48.035930 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:48.536406 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:49.036091 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:49.536168 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:50.036022 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:50.536028 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:51.036044 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:51.535933 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:52.035743 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:52.535985 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:53.036285 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:53.536263 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:54.036160 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:54.535793 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:55.036499 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:55.535443 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:56.035568 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:56.535493 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:57.036361 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:57.536078 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:58.035519 1070943 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 20:57:58.191325 1070943 kubeadm.go:1107] duration metric: took 13.304060251s to wait for elevateKubeSystemPrivileges
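The burst of "kubectl get sa default" calls above is a roughly 500ms poll for the default ServiceAccount, whose appearance signals that the service-account controller is working; that is what elevateKubeSystemPrivileges waits on. The equivalent shell loop, as a sketch:

	# wait until the default ServiceAccount exists, as the poll above does
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done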
	W0528 20:57:58.191357 1070943 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 20:57:58.191366 1070943 kubeadm.go:393] duration metric: took 29.323683003s to StartCluster
	I0528 20:57:58.191382 1070943 settings.go:142] acquiring lock: {Name:mk9dd4e0f1e49f25e638e0ae0a582e344ec1255d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:58.191485 1070943 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 20:57:58.191883 1070943 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/kubeconfig: {Name:mk43b4b38c110ff2ffbd3a6de61be9ad6b977a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 20:57:58.192061 1070943 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 20:57:58.194193 1070943 out.go:177] * Verifying Kubernetes components...
	I0528 20:57:58.192194 1070943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 20:57:58.192382 1070943 config.go:182] Loaded profile config "addons-885631": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 20:57:58.192392 1070943 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
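The toEnable map above is the resolved per-profile addon selection. The same switches can be flipped after start with the addons subcommand, e.g.:

	minikube -p addons-885631 addons list
	minikube -p addons-885631 addons enable ingress
	minikube -p addons-885631 addons disable volcano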
	I0528 20:57:58.195797 1070943 addons.go:69] Setting yakd=true in profile "addons-885631"
	I0528 20:57:58.195832 1070943 addons.go:234] Setting addon yakd=true in "addons-885631"
	I0528 20:57:58.195864 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.196339 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.196524 1070943 addons.go:69] Setting ingress-dns=true in profile "addons-885631"
	I0528 20:57:58.196547 1070943 addons.go:234] Setting addon ingress-dns=true in "addons-885631"
	I0528 20:57:58.196587 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.196952 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.197266 1070943 addons.go:69] Setting inspektor-gadget=true in profile "addons-885631"
	I0528 20:57:58.197303 1070943 addons.go:234] Setting addon inspektor-gadget=true in "addons-885631"
	I0528 20:57:58.197333 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.197444 1070943 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 20:57:58.197716 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.197828 1070943 addons.go:69] Setting cloud-spanner=true in profile "addons-885631"
	I0528 20:57:58.197850 1070943 addons.go:234] Setting addon cloud-spanner=true in "addons-885631"
	I0528 20:57:58.197877 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.198348 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.201832 1070943 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-885631"
	I0528 20:57:58.201904 1070943 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-885631"
	I0528 20:57:58.201944 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.201992 1070943 addons.go:69] Setting metrics-server=true in profile "addons-885631"
	I0528 20:57:58.202397 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.205209 1070943 addons.go:234] Setting addon metrics-server=true in "addons-885631"
	I0528 20:57:58.205259 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.205681 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.207870 1070943 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-885631"
	I0528 20:57:58.207908 1070943 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-885631"
	I0528 20:57:58.207948 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.208375 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.219742 1070943 addons.go:69] Setting default-storageclass=true in profile "addons-885631"
	I0528 20:57:58.219835 1070943 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-885631"
	I0528 20:57:58.220183 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.222072 1070943 addons.go:69] Setting registry=true in profile "addons-885631"
	I0528 20:57:58.222112 1070943 addons.go:234] Setting addon registry=true in "addons-885631"
	I0528 20:57:58.222149 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.222565 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.240837 1070943 addons.go:69] Setting gcp-auth=true in profile "addons-885631"
	I0528 20:57:58.240893 1070943 mustload.go:65] Loading cluster: addons-885631
	I0528 20:57:58.241073 1070943 config.go:182] Loaded profile config "addons-885631": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 20:57:58.241316 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.255979 1070943 addons.go:69] Setting storage-provisioner=true in profile "addons-885631"
	I0528 20:57:58.256082 1070943 addons.go:234] Setting addon storage-provisioner=true in "addons-885631"
	I0528 20:57:58.256153 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.256686 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.256898 1070943 addons.go:69] Setting ingress=true in profile "addons-885631"
	I0528 20:57:58.256922 1070943 addons.go:234] Setting addon ingress=true in "addons-885631"
	I0528 20:57:58.256965 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.257324 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.270227 1070943 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-885631"
	I0528 20:57:58.270309 1070943 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-885631"
	I0528 20:57:58.270647 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.286746 1070943 addons.go:69] Setting volcano=true in profile "addons-885631"
	I0528 20:57:58.286786 1070943 addons.go:234] Setting addon volcano=true in "addons-885631"
	I0528 20:57:58.286832 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.287271 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.301552 1070943 addons.go:69] Setting volumesnapshots=true in profile "addons-885631"
	I0528 20:57:58.301655 1070943 addons.go:234] Setting addon volumesnapshots=true in "addons-885631"
	I0528 20:57:58.301709 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.302825 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.311821 1070943 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0528 20:57:58.318398 1070943 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0528 20:57:58.318483 1070943 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0528 20:57:58.318593 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.356726 1070943 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0528 20:57:58.362475 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0528 20:57:58.362542 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0528 20:57:58.362647 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.328187 1070943 addons.go:234] Setting addon default-storageclass=true in "addons-885631"
	I0528 20:57:58.403717 1070943 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0528 20:57:58.402156 1070943 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0528 20:57:58.402165 1070943 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0528 20:57:58.402169 1070943 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0528 20:57:58.402173 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0528 20:57:58.402210 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.402955 1070943 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-885631"
	I0528 20:57:58.408883 1070943 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 20:57:58.409304 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.418282 1070943 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:57:58.418339 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.418642 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:57:58.430895 1070943 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0528 20:57:58.430906 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0528 20:57:58.430925 1070943 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 20:57:58.430938 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0528 20:57:58.440664 1070943 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 20:57:58.439024 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:57:58.439138 1070943 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 20:57:58.439210 1070943 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0528 20:57:58.440894 1070943 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 20:57:58.440948 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.446788 1070943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:57:58.446994 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.447089 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0528 20:57:58.447349 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0528 20:57:58.447361 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0528 20:57:58.450488 1070943 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0528 20:57:58.450560 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.460156 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 20:57:58.463277 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.463286 1070943 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.7.0
	I0528 20:57:58.463337 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.464418 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.469879 1070943 out.go:177]   - Using image docker.io/registry:2.8.3
	I0528 20:57:58.470094 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.470118 1070943 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0528 20:57:58.475932 1070943 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0528 20:57:58.493208 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0528 20:57:58.493184 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.511115 1070943 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 20:57:58.511137 1070943 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 20:57:58.511192 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.524553 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0528 20:57:58.526685 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0528 20:57:58.529843 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0528 20:57:58.527730 1070943 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.7.0
	I0528 20:57:58.527852 1070943 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0528 20:57:58.552309 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0528 20:57:58.552458 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.571496 1070943 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.7.0
	I0528 20:57:58.567237 1070943 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:57:58.589170 1070943 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 20:57:58.589243 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0528 20:57:58.589346 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.588757 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0528 20:57:58.619628 1070943 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0528 20:57:58.624207 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0528 20:57:58.624301 1070943 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0528 20:57:58.624470 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.639195 1070943 out.go:177]   - Using image docker.io/busybox:stable
	I0528 20:57:58.641156 1070943 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0528 20:57:58.643111 1070943 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 20:57:58.643132 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0528 20:57:58.643253 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.661415 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.663659 1070943 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 20:57:58.663990 1070943 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 20:57:58.696185 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.703087 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.719527 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.722160 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.730483 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.736623 1070943 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0528 20:57:58.737843 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (626760 bytes)
	I0528 20:57:58.737993 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:57:58.738790 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.758998 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.764867 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	W0528 20:57:58.772673 1070943 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0528 20:57:58.772715 1070943 retry.go:31] will retry after 188.333896ms: ssh: handshake failed: EOF
	I0528 20:57:58.779124 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.791854 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:57:58.794131 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
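Every ssh client above targets 127.0.0.1:33928, the host port Docker mapped to the node container's 22/tcp; the inspect format string in the log is how minikube discovers it. Reproduced by hand:

	PORT=$(docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-885631)
	ssh -p "$PORT" \
	  -i /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa \
	  docker@127.0.0.1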
	I0528 20:57:58.988856 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0528 20:57:58.988877 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0528 20:57:59.005479 1070943 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0528 20:57:59.005516 1070943 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0528 20:57:59.086609 1070943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 20:57:59.086635 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0528 20:57:59.155947 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0528 20:57:59.155976 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0528 20:57:59.160922 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0528 20:57:59.182924 1070943 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0528 20:57:59.182951 1070943 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0528 20:57:59.186324 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0528 20:57:59.189555 1070943 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0528 20:57:59.189579 1070943 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0528 20:57:59.211850 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0528 20:57:59.215721 1070943 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0528 20:57:59.215750 1070943 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0528 20:57:59.230850 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 20:57:59.232127 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 20:57:59.237236 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0528 20:57:59.237261 1070943 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0528 20:57:59.286866 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0528 20:57:59.292374 1070943 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0528 20:57:59.292400 1070943 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0528 20:57:59.294640 1070943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 20:57:59.294664 1070943 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 20:57:59.303723 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0528 20:57:59.372131 1070943 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0528 20:57:59.372155 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0528 20:57:59.390879 1070943 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0528 20:57:59.390915 1070943 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0528 20:57:59.415021 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0528 20:57:59.424768 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0528 20:57:59.424799 1070943 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0528 20:57:59.436959 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0528 20:57:59.436990 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0528 20:57:59.480394 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0528 20:57:59.577321 1070943 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0528 20:57:59.577348 1070943 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0528 20:57:59.582608 1070943 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 20:57:59.582632 1070943 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 20:57:59.608241 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0528 20:57:59.608274 1070943 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0528 20:57:59.651282 1070943 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0528 20:57:59.651315 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0528 20:57:59.734508 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0528 20:57:59.734535 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0528 20:57:59.828438 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 20:57:59.831018 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0528 20:57:59.831043 1070943 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0528 20:57:59.838941 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0528 20:57:59.838969 1070943 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0528 20:57:59.844746 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0528 20:58:00.205824 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0528 20:58:00.205857 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0528 20:58:00.529008 1070943 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0528 20:58:00.529037 1070943 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0528 20:58:00.566480 1070943 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.902430243s)
	I0528 20:58:00.567393 1070943 node_ready.go:35] waiting up to 6m0s for node "addons-885631" to be "Ready" ...
	I0528 20:58:00.567606 1070943 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.903921718s)
	I0528 20:58:00.567628 1070943 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
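The sed pipeline that just completed splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the container gateway. Reconstructed from the sed expressions above, the ConfigMap should now contain:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	#        hosts {
	#           192.168.49.1 host.minikube.internal
	#           fallthrough
	#        }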
	I0528 20:58:00.573437 1070943 node_ready.go:49] node "addons-885631" has status "Ready":"True"
	I0528 20:58:00.573520 1070943 node_ready.go:38] duration metric: took 6.097882ms for node "addons-885631" to be "Ready" ...
	I0528 20:58:00.573546 1070943 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 20:58:00.585867 1070943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:00.681207 1070943 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:58:00.681234 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0528 20:58:00.750599 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:58:01.071531 1070943 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-885631" context rescaled to 1 replicas
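Single-node profiles do not need two CoreDNS replicas, so minikube rescales the deployment; the kapi helper is doing the equivalent of:

	kubectl -n kube-system scale deployment coredns --replicas=1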
	I0528 20:58:01.102232 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0528 20:58:01.102261 1070943 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0528 20:58:01.318393 1070943 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0528 20:58:01.318419 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0528 20:58:01.486903 1070943 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 20:58:01.486925 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0528 20:58:01.568347 1070943 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0528 20:58:01.568377 1070943 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0528 20:58:01.759611 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0528 20:58:02.191701 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.005341762s)
	I0528 20:58:02.191806 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.979932076s)
	I0528 20:58:02.191871 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.961003804s)
	I0528 20:58:02.192160 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.031212475s)
	I0528 20:58:02.264595 1070943 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0528 20:58:02.264659 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0528 20:58:02.726533 1070943 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0528 20:58:02.726598 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0528 20:58:02.803265 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:02.971954 1070943 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 20:58:02.972042 1070943 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0528 20:58:03.206845 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0528 20:58:03.601385 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.369211849s)
	I0528 20:58:05.095207 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:05.450135 1070943 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0528 20:58:05.450346 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:58:05.486208 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:58:07.016074 1070943 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0528 20:58:07.404720 1070943 addons.go:234] Setting addon gcp-auth=true in "addons-885631"
	I0528 20:58:07.404787 1070943 host.go:66] Checking if "addons-885631" exists ...
	I0528 20:58:07.405239 1070943 cli_runner.go:164] Run: docker container inspect addons-885631 --format={{.State.Status}}
	I0528 20:58:07.426212 1070943 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0528 20:58:07.426265 1070943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-885631
	I0528 20:58:07.462267 1070943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33928 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/addons-885631/id_rsa Username:docker}
	I0528 20:58:07.591922 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:09.619627 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:11.050114 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.746354018s)
	I0528 20:58:11.050157 1070943 addons.go:475] Verifying addon ingress=true in "addons-885631"
	I0528 20:58:11.052467 1070943 out.go:177] * Verifying ingress addon...
	I0528 20:58:11.050363 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.763469123s)
	I0528 20:58:11.050444 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.635398557s)
	I0528 20:58:11.050479 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.570054271s)
	I0528 20:58:11.050530 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.222068126s)
	I0528 20:58:11.050565 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (11.205794617s)
	I0528 20:58:11.050649 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.300023192s)
	I0528 20:58:11.050728 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.291041609s)
	I0528 20:58:11.055948 1070943 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0528 20:58:11.052596 1070943 addons.go:475] Verifying addon registry=true in "addons-885631"
	I0528 20:58:11.052824 1070943 addons.go:475] Verifying addon metrics-server=true in "addons-885631"
	W0528 20:58:11.052840 1070943 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0528 20:58:11.058614 1070943 out.go:177] * Verifying registry addon...
	I0528 20:58:11.058622 1070943 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-885631 service yakd-dashboard -n yakd-dashboard
	
	I0528 20:58:11.058707 1070943 retry.go:31] will retry after 175.189577ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
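This is the usual CRD-before-CR ordering race: the VolumeSnapshotClass object sits in the same apply batch as the CRDs that define its kind, and API discovery has not caught up yet. minikube's answer is the retry visible below; done by hand, the robust sequence is to apply the CRDs, wait for them to be Established, then apply the custom resources:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml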
	I0528 20:58:11.061851 1070943 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0528 20:58:11.063499 1070943 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0528 20:58:11.064781 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:11.076104 1070943 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0528 20:58:11.076135 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:11.241795 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0528 20:58:11.471351 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.264324723s)
	I0528 20:58:11.471396 1070943 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-885631"
	I0528 20:58:11.473308 1070943 out.go:177] * Verifying csi-hostpath-driver addon...
	I0528 20:58:11.471532 1070943 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.045298616s)
	I0528 20:58:11.478573 1070943 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0528 20:58:11.476753 1070943 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0528 20:58:11.482743 1070943 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0528 20:58:11.484622 1070943 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0528 20:58:11.484648 1070943 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0528 20:58:11.493631 1070943 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0528 20:58:11.493658 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:11.595134 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:11.596013 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:11.644912 1070943 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0528 20:58:11.644937 1070943 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0528 20:58:11.732593 1070943 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 20:58:11.732667 1070943 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0528 20:58:11.770463 1070943 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0528 20:58:11.988006 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:12.065062 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:12.088232 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:12.095463 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:12.485709 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:12.560537 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:12.570177 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:12.987043 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:13.061262 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:13.070913 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:13.485740 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:13.486009 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.244127423s)
	I0528 20:58:13.486085 1070943 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.715551585s)
	I0528 20:58:13.489225 1070943 addons.go:475] Verifying addon gcp-auth=true in "addons-885631"
	I0528 20:58:13.491458 1070943 out.go:177] * Verifying gcp-auth addon...
	I0528 20:58:13.494471 1070943 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0528 20:58:13.496689 1070943 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 20:58:13.560903 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:13.569323 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:13.986769 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:14.062061 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:14.070467 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:14.486190 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:14.561456 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:14.571281 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:14.594041 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:14.987832 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:15.062965 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:15.073419 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:15.488448 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:15.562652 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:15.570243 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:15.986182 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:16.061292 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:16.070352 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:16.486119 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:16.560599 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:16.569724 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:16.988196 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:17.077582 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:17.079876 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:17.096492 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:17.485878 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:17.560996 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:17.569872 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:17.987022 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:18.060820 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:18.069816 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:18.486051 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:18.560610 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:18.569657 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:18.986916 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:19.060258 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:19.069591 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:19.486930 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:19.560510 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:19.569787 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:19.592270 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:19.985751 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:20.060374 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:20.069638 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:20.485566 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:20.560692 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:20.569769 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:20.986401 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:21.061537 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:21.070331 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:21.487617 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:21.561017 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:21.569901 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:21.592843 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:21.986807 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:22.061030 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:22.069639 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:22.486516 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:22.560378 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:22.569490 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:22.987476 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:23.061147 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:23.070055 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:23.487368 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:23.561838 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:23.569663 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:23.594136 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:23.985681 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:24.061552 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:24.071418 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:24.486667 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:24.560680 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:24.569951 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:24.986770 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:25.060831 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:25.069663 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:25.485627 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:25.560307 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:25.569749 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:25.987008 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:26.061425 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:26.070126 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:26.092739 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:26.488913 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:26.560451 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:26.569800 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:26.986282 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:27.061001 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:27.069492 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:27.486874 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:27.561595 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:27.572821 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:27.996320 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:28.064853 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:28.073528 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:28.493332 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:28.561081 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:28.569664 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:28.592999 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:28.986416 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:29.061430 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:29.069750 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:29.486773 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:29.560103 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:29.569890 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:29.986585 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:30.075797 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:30.076746 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:30.486607 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:30.560763 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:30.570311 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:30.986785 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:31.061029 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:31.070291 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:31.093666 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:31.486770 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:31.561057 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:31.571394 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:31.988465 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:32.061999 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:32.070811 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0528 20:58:32.488369 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:32.561000 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:32.569851 1070943 kapi.go:107] duration metric: took 21.507990525s to wait for kubernetes.io/minikube-addons=registry ...
	I0528 20:58:32.986140 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:33.061528 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:33.485799 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:33.560892 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:33.592354 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:33.987261 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:34.066957 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:34.487142 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:34.563791 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:34.986214 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:35.061580 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:35.486111 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:35.560498 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:35.986496 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:36.061569 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:36.093647 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:36.487245 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:36.560734 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:36.986244 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:37.062108 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:37.491474 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:37.562805 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:37.986664 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:38.061546 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:38.103541 1070943 pod_ready.go:102] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"False"
	I0528 20:58:38.488036 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:38.566499 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:38.986456 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:39.063406 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:39.486052 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:39.563234 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:39.985961 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:40.061010 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:40.093405 1070943 pod_ready.go:92] pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace has status "Ready":"True"
	I0528 20:58:40.093435 1070943 pod_ready.go:81] duration metric: took 39.507461299s for pod "coredns-7db6d8ff4d-km5m8" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.093449 1070943 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qg6cg" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.095822 1070943 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-qg6cg" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-qg6cg" not found
	I0528 20:58:40.095853 1070943 pod_ready.go:81] duration metric: took 2.395619ms for pod "coredns-7db6d8ff4d-qg6cg" in "kube-system" namespace to be "Ready" ...
	E0528 20:58:40.095865 1070943 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-qg6cg" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-qg6cg" not found
	I0528 20:58:40.095873 1070943 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.110976 1070943 pod_ready.go:92] pod "etcd-addons-885631" in "kube-system" namespace has status "Ready":"True"
	I0528 20:58:40.111018 1070943 pod_ready.go:81] duration metric: took 15.129905ms for pod "etcd-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.111032 1070943 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.117619 1070943 pod_ready.go:92] pod "kube-apiserver-addons-885631" in "kube-system" namespace has status "Ready":"True"
	I0528 20:58:40.117647 1070943 pod_ready.go:81] duration metric: took 6.595741ms for pod "kube-apiserver-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.117660 1070943 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.125964 1070943 pod_ready.go:92] pod "kube-controller-manager-addons-885631" in "kube-system" namespace has status "Ready":"True"
	I0528 20:58:40.125996 1070943 pod_ready.go:81] duration metric: took 8.325308ms for pod "kube-controller-manager-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.126036 1070943 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-94hxg" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.290526 1070943 pod_ready.go:92] pod "kube-proxy-94hxg" in "kube-system" namespace has status "Ready":"True"
	I0528 20:58:40.290554 1070943 pod_ready.go:81] duration metric: took 164.503739ms for pod "kube-proxy-94hxg" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.290568 1070943 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.486390 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:40.560935 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:40.690966 1070943 pod_ready.go:92] pod "kube-scheduler-addons-885631" in "kube-system" namespace has status "Ready":"True"
	I0528 20:58:40.690992 1070943 pod_ready.go:81] duration metric: took 400.416374ms for pod "kube-scheduler-addons-885631" in "kube-system" namespace to be "Ready" ...
	I0528 20:58:40.691002 1070943 pod_ready.go:38] duration metric: took 40.117416724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
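The pod_ready waits above are minikube's internal readiness gate for system-critical pods: one wait per label or component, each closed out with a duration metric. A rough external equivalent, using a label from this run (a sketch, not what pod_ready.go actually executes):

	kubectl --context addons-885631 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m0s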
	I0528 20:58:40.691022 1070943 api_server.go:52] waiting for apiserver process to appear ...
	I0528 20:58:40.691087 1070943 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 20:58:40.746920 1070943 api_server.go:72] duration metric: took 42.554830736s to wait for apiserver process to appear ...
	I0528 20:58:40.746950 1070943 api_server.go:88] waiting for apiserver healthz status ...
	I0528 20:58:40.746981 1070943 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0528 20:58:40.757283 1070943 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0528 20:58:40.758522 1070943 api_server.go:141] control plane version: v1.30.1
	I0528 20:58:40.758588 1070943 api_server.go:131] duration metric: took 11.618792ms to wait for apiserver health ...
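The healthz check is a plain HTTPS GET against the apiserver endpoint logged above; a 200 response with body "ok" counts as healthy. A hand-run equivalent (hypothetical invocation; -k skips certificate verification for brevity):

	curl -k https://192.168.49.2:8443/healthz
	# expected response body: ok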
	I0528 20:58:40.758611 1070943 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 20:58:40.899913 1070943 system_pods.go:59] 17 kube-system pods found
	I0528 20:58:40.900023 1070943 system_pods.go:61] "coredns-7db6d8ff4d-km5m8" [8ff9d1ba-c13f-4864-9109-54ca0338ff97] Running
	I0528 20:58:40.900053 1070943 system_pods.go:61] "csi-hostpath-attacher-0" [48b99766-3c42-4918-af72-5d75258528bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0528 20:58:40.900087 1070943 system_pods.go:61] "csi-hostpath-resizer-0" [2baf58cf-aeaf-43f8-ab9e-2579aacfdb59] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0528 20:58:40.900121 1070943 system_pods.go:61] "csi-hostpathplugin-k2kw5" [2a699f1a-635d-41fb-a88b-03e872226fab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0528 20:58:40.900143 1070943 system_pods.go:61] "etcd-addons-885631" [877d59b7-da0e-4ab7-852a-4663652c68f5] Running
	I0528 20:58:40.900168 1070943 system_pods.go:61] "kube-apiserver-addons-885631" [d7770a61-95df-4fa4-b0a4-330ec3f150ab] Running
	I0528 20:58:40.900202 1070943 system_pods.go:61] "kube-controller-manager-addons-885631" [f945daf7-a163-4bf8-bde7-9f1dd3832e0e] Running
	I0528 20:58:40.900246 1070943 system_pods.go:61] "kube-ingress-dns-minikube" [12cedf5d-9526-482d-a2da-061a8eddc783] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 20:58:40.900272 1070943 system_pods.go:61] "kube-proxy-94hxg" [e517e5f2-0536-42fe-9eed-22e8baffb452] Running
	I0528 20:58:40.900293 1070943 system_pods.go:61] "kube-scheduler-addons-885631" [91540077-7a24-407d-9286-4cd57fba422b] Running
	I0528 20:58:40.900331 1070943 system_pods.go:61] "metrics-server-c59844bb4-r5rzf" [8612d4f8-944b-4a38-803c-b72629ba7c6e] Running
	I0528 20:58:40.900358 1070943 system_pods.go:61] "nvidia-device-plugin-daemonset-7cvnl" [5751dee2-aa23-4e6b-9820-920308dcf9b6] Running
	I0528 20:58:40.900381 1070943 system_pods.go:61] "registry-proxy-zjgsw" [7b440740-69b2-460e-8ffa-92e1f22ea088] Running
	I0528 20:58:40.900404 1070943 system_pods.go:61] "registry-zksdm" [1fab4f53-2cf7-4c1f-abcb-b10d106e8a5b] Running
	I0528 20:58:40.900441 1070943 system_pods.go:61] "snapshot-controller-745499f584-k94ns" [44b1b3d7-fc08-49f5-b63b-12636b910c6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:58:40.900469 1070943 system_pods.go:61] "snapshot-controller-745499f584-t94gb" [8328aea4-b041-4226-848f-e23d62e056a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:58:40.900494 1070943 system_pods.go:61] "storage-provisioner" [99085ea8-e2fb-4522-9f8d-d4ccd01c0318] Running
	I0528 20:58:40.900518 1070943 system_pods.go:74] duration metric: took 141.886801ms to wait for pod list to return data ...
	I0528 20:58:40.900552 1070943 default_sa.go:34] waiting for default service account to be created ...
	I0528 20:58:40.986591 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:41.060888 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:41.092415 1070943 default_sa.go:45] found service account: "default"
	I0528 20:58:41.092507 1070943 default_sa.go:55] duration metric: took 191.928343ms for default service account to be created ...
	I0528 20:58:41.092545 1070943 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 20:58:41.297647 1070943 system_pods.go:86] 17 kube-system pods found
	I0528 20:58:41.297746 1070943 system_pods.go:89] "coredns-7db6d8ff4d-km5m8" [8ff9d1ba-c13f-4864-9109-54ca0338ff97] Running
	I0528 20:58:41.297775 1070943 system_pods.go:89] "csi-hostpath-attacher-0" [48b99766-3c42-4918-af72-5d75258528bb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0528 20:58:41.297795 1070943 system_pods.go:89] "csi-hostpath-resizer-0" [2baf58cf-aeaf-43f8-ab9e-2579aacfdb59] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0528 20:58:41.297820 1070943 system_pods.go:89] "csi-hostpathplugin-k2kw5" [2a699f1a-635d-41fb-a88b-03e872226fab] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0528 20:58:41.297828 1070943 system_pods.go:89] "etcd-addons-885631" [877d59b7-da0e-4ab7-852a-4663652c68f5] Running
	I0528 20:58:41.297835 1070943 system_pods.go:89] "kube-apiserver-addons-885631" [d7770a61-95df-4fa4-b0a4-330ec3f150ab] Running
	I0528 20:58:41.297839 1070943 system_pods.go:89] "kube-controller-manager-addons-885631" [f945daf7-a163-4bf8-bde7-9f1dd3832e0e] Running
	I0528 20:58:41.297846 1070943 system_pods.go:89] "kube-ingress-dns-minikube" [12cedf5d-9526-482d-a2da-061a8eddc783] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0528 20:58:41.297850 1070943 system_pods.go:89] "kube-proxy-94hxg" [e517e5f2-0536-42fe-9eed-22e8baffb452] Running
	I0528 20:58:41.297856 1070943 system_pods.go:89] "kube-scheduler-addons-885631" [91540077-7a24-407d-9286-4cd57fba422b] Running
	I0528 20:58:41.297860 1070943 system_pods.go:89] "metrics-server-c59844bb4-r5rzf" [8612d4f8-944b-4a38-803c-b72629ba7c6e] Running
	I0528 20:58:41.297864 1070943 system_pods.go:89] "nvidia-device-plugin-daemonset-7cvnl" [5751dee2-aa23-4e6b-9820-920308dcf9b6] Running
	I0528 20:58:41.297876 1070943 system_pods.go:89] "registry-proxy-zjgsw" [7b440740-69b2-460e-8ffa-92e1f22ea088] Running
	I0528 20:58:41.297881 1070943 system_pods.go:89] "registry-zksdm" [1fab4f53-2cf7-4c1f-abcb-b10d106e8a5b] Running
	I0528 20:58:41.297887 1070943 system_pods.go:89] "snapshot-controller-745499f584-k94ns" [44b1b3d7-fc08-49f5-b63b-12636b910c6d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:58:41.297898 1070943 system_pods.go:89] "snapshot-controller-745499f584-t94gb" [8328aea4-b041-4226-848f-e23d62e056a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0528 20:58:41.297905 1070943 system_pods.go:89] "storage-provisioner" [99085ea8-e2fb-4522-9f8d-d4ccd01c0318] Running
	I0528 20:58:41.297913 1070943 system_pods.go:126] duration metric: took 205.335712ms to wait for k8s-apps to be running ...
	I0528 20:58:41.297924 1070943 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 20:58:41.297980 1070943 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 20:58:41.312027 1070943 system_svc.go:56] duration metric: took 14.093269ms WaitForService to wait for kubelet
	I0528 20:58:41.312063 1070943 kubeadm.go:576] duration metric: took 43.119978816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 20:58:41.312086 1070943 node_conditions.go:102] verifying NodePressure condition ...
	I0528 20:58:41.486925 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:41.489853 1070943 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0528 20:58:41.489886 1070943 node_conditions.go:123] node cpu capacity is 2
	I0528 20:58:41.489900 1070943 node_conditions.go:105] duration metric: took 177.808728ms to run NodePressure ...
	I0528 20:58:41.489913 1070943 start.go:240] waiting for startup goroutines ...
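The NodePressure verification reads capacity and conditions off the node object; the figures logged above (203034800Ki ephemeral storage, 2 CPUs) come straight from the node status. A hypothetical manual check, assuming the node carries the profile name addons-885631:

	kubectl --context addons-885631 get node addons-885631 -o jsonpath='{.status.capacity}'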
	I0528 20:58:41.565466 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:41.986299 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:42.061066 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:42.486199 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:42.560285 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:42.985631 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:43.060346 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:43.486182 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:43.561568 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:43.986688 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:44.061520 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:44.487664 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:44.560892 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:44.986197 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:45.063804 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:45.486519 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:45.561437 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:45.986340 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:46.061007 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:46.488308 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:46.566861 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:46.987103 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:47.061099 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:47.487637 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:47.595753 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:47.986675 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:48.061466 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:48.486369 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:48.560101 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:48.986490 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:49.062837 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:49.492437 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:49.561580 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:49.986420 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:50.063599 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:50.486363 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:50.560876 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:50.986980 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:51.061330 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:51.486364 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:51.561063 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:51.985614 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:52.064724 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:52.485883 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:52.560615 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:52.986652 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:53.063193 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:53.503196 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:53.561785 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:53.987930 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:54.061315 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:54.485994 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:54.565981 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:54.987066 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:55.060276 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:55.485958 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:55.561274 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:55.985786 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:56.060465 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:56.487035 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:56.561002 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:56.985363 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:57.060418 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:57.485866 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:57.560632 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:57.986088 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:58.060278 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:58.486243 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:58.560798 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:58.986795 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:59.060672 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:59.485724 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:58:59.561225 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:58:59.986800 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:59:00.062302 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:00.486705 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0528 20:59:00.560377 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:00.985504 1070943 kapi.go:107] duration metric: took 49.50874889s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0528 20:59:01.060771 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:01.560527 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:02.060429 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:02.560335 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:03.060724 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:03.560509 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:04.060923 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:04.560757 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:05.060889 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:05.560689 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:06.061060 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:06.560914 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:07.060464 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:07.559979 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:08.060518 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:08.560549 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:09.061274 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:09.560886 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:10.061743 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:10.560445 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:11.060607 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:11.560659 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:12.060737 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:12.561348 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:13.070095 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:13.561342 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:14.060641 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:14.561804 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:15.060573 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:15.560600 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:16.060287 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:16.561124 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:17.060163 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:17.561263 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:18.060917 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:18.561464 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:19.064995 1070943 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0528 20:59:19.560818 1070943 kapi.go:107] duration metric: took 1m8.504954765s to wait for app.kubernetes.io/name=ingress-nginx ...
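The kapi.go:96 lines that dominate this log are one poll loop per label selector: list the matching pods, log the current phase, sleep, and retry until the pods come up, then emit the kapi.go:107 duration metric seen here. A shell rendering of the same loop (a sketch assuming the ingress-nginx namespace; not minikube's actual implementation, which polls through client-go):

	# Loop until matching pods exist and none of them is still Pending.
	until kubectl --context addons-885631 -n ingress-nginx get pods \
	    -l app.kubernetes.io/name=ingress-nginx \
	    -o jsonpath='{.items[*].status.phase}' | grep -qv Pending; do
	  sleep 0.5
	done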
	I0528 20:59:35.498168 1070943 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0528 20:59:35.498193 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same "waiting for pod \"kubernetes.io/minikube-addons=gcp-auth\", current state: Pending: [<nil>]" line repeats roughly every 500ms from 20:59:35 through 21:00:42; 134 near-identical lines elided ...]
	I0528 21:00:42.998084 1070943 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0528 21:00:43.498266 1070943 kapi.go:107] duration metric: took 2m30.003792715s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0528 21:00:43.500311 1070943 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-885631 cluster.
	I0528 21:00:43.502086 1070943 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0528 21:00:43.503994 1070943 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0528 21:00:43.505778 1070943 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, storage-provisioner, inspektor-gadget, volcano, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0528 21:00:43.507472 1070943 addons.go:510] duration metric: took 2m45.315063214s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner default-storageclass storage-provisioner inspektor-gadget volcano metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0528 21:00:43.507521 1070943 start.go:245] waiting for cluster config update ...
	I0528 21:00:43.507548 1070943 start.go:254] writing updated cluster config ...
	I0528 21:00:43.507846 1070943 ssh_runner.go:195] Run: rm -f paused
	I0528 21:00:43.832710 1070943 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:00:43.834806 1070943 out.go:177] * Done! kubectl is now configured to use "addons-885631" cluster and "default" namespace by default
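	
	The gcp-auth-skip-secret note above describes an opt-out by label: any pod carrying that label key is left untouched by the mutating webhook. A hedged client-go sketch of creating such a pod (the pod name and image are placeholders):
	
	  package gcpauthdemo
	
	  import (
	      "context"
	
	      corev1 "k8s.io/api/core/v1"
	      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	      "k8s.io/client-go/kubernetes"
	  )
	
	  // createUnmountedPod creates a pod the gcp-auth webhook will skip:
	  // per the message above, the gcp-auth-skip-secret label key opts
	  // the pod out of credential injection.
	  func createUnmountedPod(ctx context.Context, c kubernetes.Interface) error {
	      pod := &corev1.Pod{
	          ObjectMeta: metav1.ObjectMeta{
	              Name:   "no-creds", // placeholder name
	              Labels: map[string]string{"gcp-auth-skip-secret": "true"},
	          },
	          Spec: corev1.PodSpec{
	              Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
	          },
	      }
	      _, err := c.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	      return err
	  }
	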
	
	
	==> Docker <==
	May 28 21:01:50 addons-885631 dockerd[1138]: time="2024-05-28T21:01:50.159793416Z" level=info msg="ignoring event" container=06901c3b7e779520d9e95a6f9f73c7cb69e45ef85b9a249fbc9b95a6615e9500 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:50 addons-885631 dockerd[1138]: time="2024-05-28T21:01:50.275026786Z" level=info msg="ignoring event" container=14743e903dc4852c7b7efb7d12af26d5623712d15e927570d8b0ef6f85dafd53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:50 addons-885631 dockerd[1138]: time="2024-05-28T21:01:50.530134194Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=1d76bc316dbe5c98110a7d8085be67e8a9cf95de18a5f3a82790269b514f50c4 spanID=696c7c068957ee48 traceID=0b20f24f5d040ea59661d768499e3a71
	May 28 21:01:50 addons-885631 dockerd[1138]: time="2024-05-28T21:01:50.580731288Z" level=info msg="ignoring event" container=1d76bc316dbe5c98110a7d8085be67e8a9cf95de18a5f3a82790269b514f50c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:50 addons-885631 dockerd[1138]: time="2024-05-28T21:01:50.698628635Z" level=info msg="ignoring event" container=506b12372f58cf8c4c42c25aaed099090918982c96a20f55b892b0c2c09c2836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:51 addons-885631 dockerd[1138]: time="2024-05-28T21:01:51.388792536Z" level=info msg="ignoring event" container=6c28e27ca27bb2832ddc7a2329711da6c041a67d0528facb2a34cfe1b381c38e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.303205986Z" level=info msg="ignoring event" container=23ea09caa8d7a381be16bbb94f8e2e38c04b35b28b7d7da83239359e3a837a5d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.306408782Z" level=info msg="ignoring event" container=8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.326154264Z" level=info msg="ignoring event" container=8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.326204371Z" level=info msg="ignoring event" container=8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.326285411Z" level=info msg="ignoring event" container=a635bd8eb222e82b1d672357c1567e1f1339f98d9b774a556f33e0a53a2452f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.333229229Z" level=info msg="ignoring event" container=f33ef242166792858bb7e35af0e424e5293404f02d8624437e2b3f7064bccfe3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.335256684Z" level=info msg="ignoring event" container=f2c3f694d6dfb45baa3b34018947d15de37c9490bba3b22da5613afd08c6d23f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.439849184Z" level=info msg="ignoring event" container=dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.680796888Z" level=info msg="ignoring event" container=811ded2cc80f413744f0618655a32e455aef529a7b21e7bd3738246655d4a738 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.833602162Z" level=info msg="ignoring event" container=787783beb96e95fb7ef4c1bcb2bb9713160f598f83476adfb137251e6ea0354b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:52 addons-885631 dockerd[1138]: time="2024-05-28T21:01:52.863012594Z" level=info msg="ignoring event" container=24f002b670fabd5d786865cb5c9421e2669d4cf76397b46c6651819b843698dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="error getting RW layer size for container ID '8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1': Error response from daemon: No such container: 8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1'"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="error getting RW layer size for container ID 'dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345': Error response from daemon: No such container: dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345'"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="error getting RW layer size for container ID '8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2': Error response from daemon: No such container: 8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2'"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="error getting RW layer size for container ID '8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e': Error response from daemon: No such container: 8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e"
	May 28 21:01:55 addons-885631 cri-dockerd[1350]: time="2024-05-28T21:01:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID '8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e'"
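	
	The "Set backoffDuration to : 1m0s" errors show cri-dockerd caching a one-minute backoff for container IDs whose RW-layer size lookup failed because the container was already deleted during the addon teardown, so it stops re-querying the daemon for them. A rough sketch of such a per-ID backoff cache (illustrative only, not cri-dockerd's actual implementation):
	
	  package backoffcache
	
	  import (
	      "sync"
	      "time"
	  )
	
	  // backoffCache remembers IDs that recently failed so callers skip
	  // them for a fixed window, the way cri-dockerd sets a 1m0s backoff
	  // for a container ID it can no longer find.
	  type backoffCache struct {
	      mu    sync.Mutex
	      until map[string]time.Time
	      d     time.Duration
	  }
	
	  func newBackoffCache(d time.Duration) *backoffCache {
	      return &backoffCache{until: make(map[string]time.Time), d: d}
	  }
	
	  // Fail records a failed lookup and starts the backoff window.
	  func (b *backoffCache) Fail(id string) {
	      b.mu.Lock()
	      defer b.mu.Unlock()
	      b.until[id] = time.Now().Add(b.d)
	  }
	
	  // Skip reports whether id is still inside its backoff window.
	  func (b *backoffCache) Skip(id string) bool {
	      b.mu.Lock()
	      defer b.mu.Unlock()
	      return time.Now().Before(b.until[id])
	  }
	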
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	6c28e27ca27bb       dd1b12fcb6097                                                                                                                4 seconds ago        Exited              hello-world-app              2                   226f06148bc09       hello-world-app-86c47465fc-4xtx4
	faf0858074460       nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                                                33 seconds ago       Running             nginx                        0                   ac05f2c73bb5e       nginx
	e9e152050b6c7       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        About a minute ago   Running             headlamp                     0                   93b6bac12b421       headlamp-68456f997b-stt7b
	9efc6d685f540       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 About a minute ago   Running             gcp-auth                     0                   5d0d1fff39cea       gcp-auth-5db96cd9b4-25j4q
	d79726f9a1160       ee30e7819d386                                                                                                                2 minutes ago        Running             admission                    0                   a0425e0b58821       volcano-admission-7b497cf95b-szz4g
	bd3d51f53a4e7       volcanosh/vc-scheduler@sha256:64d6efcf1a48366201aafcaf1bd4cb6d66246ec1c395ddb0deefe11350bcebba                               3 minutes ago        Running             volcano-scheduler            0                   3cfffb87c7632       volcano-scheduler-765f888978-9hfhq
	95cd18712331f       volcanosh/vc-controller-manager@sha256:1dd0973f67becc3336f009cce4eac8677d857aaf4ba766cfff371ad34dfc34cf                      3 minutes ago        Running             volcano-controller           0                   05c6cc64e417d       volcano-controller-86c5446455-r6rsm
	b2c10f10939cd       volcanosh/vc-webhook-manager@sha256:082b6a3b7b8b69d98541a8ea56958ef427fdba54ea555870799f8c9ec2754c1b                         3 minutes ago        Exited              main                         0                   79531b539c714       volcano-admission-init-nr82g
	7a4dc398c777a       296b5f799fcd8                                                                                                                3 minutes ago        Exited              patch                        1                   80435ad554474       ingress-nginx-admission-patch-j5kgx
	ce54a72c6d8b7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   3 minutes ago        Exited              create                       0                   14a21610af18a       ingress-nginx-admission-create-b2ltv
	a940fb94645fb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      3 minutes ago        Running             volume-snapshot-controller   0                   159796ff98d78       snapshot-controller-745499f584-k94ns
	5f5093bc0cf62       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      3 minutes ago        Running             volume-snapshot-controller   0                   d9660fce2bae1       snapshot-controller-745499f584-t94gb
	b106dda063763       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        3 minutes ago        Running             yakd                         0                   f83f07df8098f       yakd-dashboard-5ddbf7d777-pdng2
	85d97e232eb82       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       3 minutes ago        Running             local-path-provisioner       0                   f845317d5b5cc       local-path-provisioner-8d985888d-m8k7b
	78600e9fa1f53       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4               3 minutes ago        Running             cloud-spanner-emulator       0                   10f8573259df9       cloud-spanner-emulator-6fcd4f6f98-lnsv2
	e0ae74639190a       nvcr.io/nvidia/k8s-device-plugin@sha256:1aff0e9f0759758f87cb158d78241472af3a76cdc631f01ab395f997fa80f707                     3 minutes ago        Running             nvidia-device-plugin-ctr     0                   1f3e356abbc75       nvidia-device-plugin-daemonset-7cvnl
	b2552639da890       ba04bb24b9575                                                                                                                3 minutes ago        Running             storage-provisioner          0                   c5daf119f7bf3       storage-provisioner
	e981edbcae28e       2437cf7621777                                                                                                                3 minutes ago        Running             coredns                      0                   808d102041c41       coredns-7db6d8ff4d-km5m8
	2bbc9e319693a       05eccb821e159                                                                                                                3 minutes ago        Running             kube-proxy                   0                   2cbfea440b4be       kube-proxy-94hxg
	23411c4417f50       163ff818d154d                                                                                                                4 minutes ago        Running             kube-scheduler               0                   5ffe71c62365b       kube-scheduler-addons-885631
	6266a265a8859       988b55d423baf                                                                                                                4 minutes ago        Running             kube-apiserver               0                   e35ab1720dd17       kube-apiserver-addons-885631
	f62d90c1d3096       014faa467e297                                                                                                                4 minutes ago        Running             etcd                         0                   5b6af8f45e400       etcd-addons-885631
	c928f43013dd5       234ac56e455be                                                                                                                4 minutes ago        Running             kube-controller-manager      0                   de757acd693f9       kube-controller-manager-addons-885631
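	
	Note hello-world-app above: STATE Exited, ATTEMPT 2. The kubelet restarts a crash-looping container with exponential backoff, roughly 10s after the first failure, doubling per restart and capped at 5m. A small sketch of that schedule (the exact bookkeeping inside kubelet differs):
	
	  package crashloop
	
	  import "time"
	
	  // crashLoopDelay approximates kubelet's restart delay before the
	  // given attempt of a crash-looping container: 10s after the first
	  // failure, doubling each time, capped at 5m. hello-world-app is on
	  // attempt 2 above, so it waited about 20s before this restart.
	  func crashLoopDelay(attempt int) time.Duration {
	      d := 10 * time.Second
	      for i := 1; i < attempt; i++ {
	          d *= 2
	          if d >= 5*time.Minute {
	              return 5 * time.Minute
	          }
	      }
	      return d
	  }
	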
	
	
	==> coredns [e981edbcae28] <==
	[INFO] 10.244.0.21:60106 - 54801 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048204s
	[INFO] 10.244.0.21:35989 - 54452 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002210384s
	[INFO] 10.244.0.21:60106 - 39310 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001348136s
	[INFO] 10.244.0.21:35989 - 54105 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002731611s
	[INFO] 10.244.0.21:60106 - 1469 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001954235s
	[INFO] 10.244.0.21:35989 - 2874 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000103621s
	[INFO] 10.244.0.21:60106 - 49468 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000072835s
	[INFO] 10.244.0.21:49716 - 17150 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088039s
	[INFO] 10.244.0.21:49716 - 45092 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084962s
	[INFO] 10.244.0.21:59353 - 21638 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058518s
	[INFO] 10.244.0.21:59353 - 32301 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000069545s
	[INFO] 10.244.0.21:49716 - 37473 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000078045s
	[INFO] 10.244.0.21:59353 - 65187 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000087309s
	[INFO] 10.244.0.21:49716 - 28607 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050214s
	[INFO] 10.244.0.21:59353 - 34557 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060503s
	[INFO] 10.244.0.21:49716 - 14169 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051642s
	[INFO] 10.244.0.21:49716 - 62113 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051125s
	[INFO] 10.244.0.21:59353 - 38806 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037226s
	[INFO] 10.244.0.21:59353 - 59264 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046982s
	[INFO] 10.244.0.21:49716 - 1175 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001727138s
	[INFO] 10.244.0.21:59353 - 48416 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001158134s
	[INFO] 10.244.0.21:49716 - 45993 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001126069s
	[INFO] 10.244.0.21:59353 - 57291 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001204181s
	[INFO] 10.244.0.21:49716 - 49591 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000086718s
	[INFO] 10.244.0.21:59353 - 56727 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000133297s
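	
	The NXDOMAIN/NOERROR pairs above are ordinary resolv.conf search-path expansion: with Kubernetes' default ndots:5, a name with fewer than five dots is tried against every search domain first (here ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local, and the node's us-east-2.compute.internal suffix), and only the fully qualified service name answers NOERROR. A simplified model of the candidate ordering (not CoreDNS code):
	
	  package dnsutil
	
	  import "strings"
	
	  // expand lists the lookup candidates a pod resolver tries for name:
	  // because the name has fewer dots than ndots, every search domain is
	  // appended and tried before the bare name itself.
	  //
	  // expand("hello-world-app.default.svc.cluster.local",
	  //     []string{"ingress-nginx.svc.cluster.local", "svc.cluster.local",
	  //         "cluster.local", "us-east-2.compute.internal"}, 5)
	  // yields exactly the four suffixed queries logged above, then the
	  // bare name that finally resolves.
	  func expand(name string, search []string, ndots int) []string {
	      if strings.Count(name, ".") >= ndots || strings.HasSuffix(name, ".") {
	          return []string{name}
	      }
	      out := make([]string, 0, len(search)+1)
	      for _, s := range search {
	          out = append(out, name+"."+s)
	      }
	      return append(out, name)
	  }
	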
	
	
	==> describe nodes <==
	Name:               addons-885631
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-885631
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=addons-885631
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T20_57_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-885631
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 20:57:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-885631
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:01:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:01:49 +0000   Tue, 28 May 2024 20:57:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:01:49 +0000   Tue, 28 May 2024 20:57:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:01:49 +0000   Tue, 28 May 2024 20:57:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:01:49 +0000   Tue, 28 May 2024 20:57:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-885631
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	System Info:
	  Machine ID:                 d6e63a647f384674bd67e8da620ce30f
	  System UUID:                68666082-b25b-4abc-b6db-667b1d195955
	  Boot ID:                    869fd7c8-60a7-4ae5-b10f-ba225f4e7da7
	  Kernel Version:             5.15.0-1062-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-lnsv2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  default                     hello-world-app-86c47465fc-4xtx4           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-5db96cd9b4-25j4q                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  headlamp                    headlamp-68456f997b-stt7b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-7db6d8ff4d-km5m8                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m57s
	  kube-system                 etcd-addons-885631                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m12s
	  kube-system                 kube-apiserver-addons-885631               250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-controller-manager-addons-885631      200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-94hxg                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-addons-885631               100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 nvidia-device-plugin-daemonset-7cvnl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 snapshot-controller-745499f584-k94ns       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 snapshot-controller-745499f584-t94gb       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  local-path-storage          local-path-provisioner-8d985888d-m8k7b     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	  volcano-system              volcano-admission-7b497cf95b-szz4g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  volcano-system              volcano-controller-86c5446455-r6rsm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  volcano-system              volcano-scheduler-765f888978-9hfhq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-pdng2            0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m19s (x8 over 4m19s)  kubelet          Node addons-885631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x8 over 4m19s)  kubelet          Node addons-885631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s (x7 over 4m19s)  kubelet          Node addons-885631 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m11s                  kubelet          Node addons-885631 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s                  kubelet          Node addons-885631 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s                  kubelet          Node addons-885631 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4m11s                  kubelet          Node addons-885631 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m11s                  kubelet          Node addons-885631 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           3m58s                  node-controller  Node addons-885631 event: Registered Node addons-885631 in Controller
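	
	The Conditions table above is what `kubectl describe` renders from NodeStatus; the Ready row is the one the test harness ultimately gates on. A minimal sketch of reading it with the standard client-go types:
	
	  package nodeutil
	
	  import corev1 "k8s.io/api/core/v1"
	
	  // isReady reports whether a node's kubelet is posting Ready=True,
	  // the same condition the table above shows for addons-885631.
	  func isReady(node *corev1.Node) bool {
	      for _, c := range node.Status.Conditions {
	          if c.Type == corev1.NodeReady {
	              return c.Status == corev1.ConditionTrue
	          }
	      }
	      return false
	  }
	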
	
	
	==> dmesg <==
	[  +0.000694] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000920e9926
	[  +0.001060] FS-Cache: N-key=[8] '1871ed0000000000'
	[  +0.003693] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=0000000086b8c112
	[  +0.001090] FS-Cache: O-key=[8] '1871ed0000000000'
	[  +0.000789] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001035] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=000000007762c4e2
	[  +0.001070] FS-Cache: N-key=[8] '1871ed0000000000'
	[  +2.335946] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=000000005818c61c
	[  +0.001079] FS-Cache: O-key=[8] '1771ed0000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000122b7051
	[  +0.001090] FS-Cache: N-key=[8] '1771ed0000000000'
	[  +0.351994] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001058] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=0000000006f007f4
	[  +0.001125] FS-Cache: O-key=[8] '1d71ed0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000966] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000920e9926
	[  +0.001088] FS-Cache: N-key=[8] '1d71ed0000000000'
	[May28 20:29] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [f62d90c1d309] <==
	{"level":"info","ts":"2024-05-28T20:57:37.722455Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-28T20:57:37.723424Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-28T20:57:37.747977Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T20:57:37.748174Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T20:57:37.748198Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T20:57:37.748292Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T20:57:37.748307Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T20:57:38.468139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-28T20:57:38.468421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-28T20:57:38.468529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-28T20:57:38.468664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-28T20:57:38.468746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-28T20:57:38.468838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-28T20:57:38.46893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-28T20:57:38.474207Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T20:57:38.474604Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T20:57:38.476522Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T20:57:38.476962Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T20:57:38.482094Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T20:57:38.482282Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T20:57:38.476985Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T20:57:38.477025Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-885631 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T20:57:38.484379Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-28T20:57:38.495183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T20:57:38.498051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [9efc6d685f54] <==
	2024/05/28 21:00:42 GCP Auth Webhook started!
	2024/05/28 21:00:44 Ready to marshal response ...
	2024/05/28 21:00:44 Ready to write response ...
	2024/05/28 21:00:44 Ready to marshal response ...
	2024/05/28 21:00:44 Ready to write response ...
	2024/05/28 21:00:44 Ready to marshal response ...
	2024/05/28 21:00:44 Ready to write response ...
	2024/05/28 21:00:55 Ready to marshal response ...
	2024/05/28 21:00:55 Ready to write response ...
	2024/05/28 21:01:10 Ready to marshal response ...
	2024/05/28 21:01:10 Ready to write response ...
	2024/05/28 21:01:19 Ready to marshal response ...
	2024/05/28 21:01:19 Ready to write response ...
	2024/05/28 21:01:30 Ready to marshal response ...
	2024/05/28 21:01:30 Ready to write response ...
	2024/05/28 21:01:41 Ready to marshal response ...
	2024/05/28 21:01:41 Ready to write response ...
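	
	Each "Ready to marshal response ... Ready to write response" pair above is one admission-review round trip. A minimal sketch of the handler shape, assuming k8s.io/api/admission/v1; the real gcp-auth webhook additionally builds the JSON patch that mounts the credentials, which is omitted here:
	
	  package webhook
	
	  import (
	      "encoding/json"
	      "net/http"
	
	      admissionv1 "k8s.io/api/admission/v1"
	  )
	
	  // serveMutate decodes an AdmissionReview, attaches an allowed
	  // response carrying the same UID, and writes it back: the
	  // unmarshal/marshal cycle behind each log pair above.
	  func serveMutate(w http.ResponseWriter, r *http.Request) {
	      var review admissionv1.AdmissionReview
	      if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
	          http.Error(w, err.Error(), http.StatusBadRequest)
	          return
	      }
	      if review.Request == nil {
	          http.Error(w, "empty AdmissionReview request", http.StatusBadRequest)
	          return
	      }
	      review.Response = &admissionv1.AdmissionResponse{
	          UID:     review.Request.UID,
	          Allowed: true,
	      }
	      w.Header().Set("Content-Type", "application/json")
	      json.NewEncoder(w).Encode(review)
	  }
	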
	
	
	==> kernel <==
	 21:01:56 up  4:44,  0 users,  load average: 1.34, 2.21, 2.50
	Linux addons-885631 5.15.0-1062-aws #68~20.04.1-Ubuntu SMP Tue May 7 11:50:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [6266a265a885] <==
	W0528 20:59:16.264459       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:16.385374       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	E0528 20:59:16.385409       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	W0528 20:59:16.385902       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:16.418492       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	E0528 20:59:16.418526       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	W0528 20:59:16.418920       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:17.342561       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:18.366416       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:19.455389       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:20.529692       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.107.153.12:443: connect: connection refused
	W0528 20:59:35.321160       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	E0528 20:59:35.321201       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	W0528 21:00:16.392182       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	E0528 21:00:16.392221       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	W0528 21:00:16.424760       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	E0528 21:00:16.424805       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.204.75:443: connect: connection refused
	I0528 21:00:44.761837       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.145.243"}
	I0528 21:01:14.124105       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0528 21:01:15.155572       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0528 21:01:19.777005       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0528 21:01:20.104542       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.86.70"}
	I0528 21:01:21.426874       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0528 21:01:30.856889       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.250.198"}
	I0528 21:01:40.784299       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [c928f43013dd] <==
	W0528 21:01:18.271549       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:01:18.271589       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0528 21:01:23.940748       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:01:23.940789       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:01:24.332108       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0528 21:01:28.141199       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0528 21:01:28.141243       1 shared_informer.go:320] Caches are synced for resource quota
	I0528 21:01:28.566553       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0528 21:01:28.566605       1 shared_informer.go:320] Caches are synced for garbage collector
	I0528 21:01:30.662415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="64.131033ms"
	W0528 21:01:30.665077       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:01:30.665245       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0528 21:01:30.696863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="34.398121ms"
	I0528 21:01:30.696930       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="31.638µs"
	I0528 21:01:35.627228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="35.453µs"
	I0528 21:01:36.659798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="43.027µs"
	I0528 21:01:37.684990       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="41.033µs"
	I0528 21:01:47.488688       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0528 21:01:47.493315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="4.529µs"
	I0528 21:01:47.501792       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0528 21:01:52.050709       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="61.619µs"
	I0528 21:01:52.064364       1 stateful_set.go:458] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0528 21:01:52.163496       1 stateful_set.go:458] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	W0528 21:01:54.244554       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0528 21:01:54.244595       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2bbc9e319693] <==
	I0528 20:58:00.872258       1 server_linux.go:69] "Using iptables proxy"
	I0528 20:58:00.896167       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0528 20:58:00.936275       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0528 20:58:00.936326       1 server_linux.go:165] "Using iptables Proxier"
	I0528 20:58:00.938002       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0528 20:58:00.938038       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0528 20:58:00.938061       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 20:58:00.938411       1 server.go:872] "Version info" version="v1.30.1"
	I0528 20:58:00.938428       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 20:58:00.939375       1 config.go:192] "Starting service config controller"
	I0528 20:58:00.939395       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 20:58:00.939417       1 config.go:101] "Starting endpoint slice config controller"
	I0528 20:58:00.939421       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 20:58:00.940041       1 config.go:319] "Starting node config controller"
	I0528 20:58:00.940050       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 20:58:01.046988       1 shared_informer.go:320] Caches are synced for service config
	I0528 20:58:01.047042       1 shared_informer.go:320] Caches are synced for node config
	I0528 20:58:01.047070       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [23411c4417f5] <==
	W0528 20:57:42.189488       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 20:57:42.189523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0528 20:57:42.189573       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 20:57:42.189597       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0528 20:57:42.189694       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 20:57:42.189714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0528 20:57:42.189808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 20:57:42.189838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0528 20:57:42.189888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 20:57:42.189927       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0528 20:57:42.190137       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 20:57:42.190159       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0528 20:57:42.190224       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 20:57:42.190241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0528 20:57:42.190315       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 20:57:42.190339       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0528 20:57:42.190402       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 20:57:42.190416       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0528 20:57:42.190499       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0528 20:57:42.190536       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0528 20:57:42.190590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 20:57:42.190621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0528 20:57:42.190661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0528 20:57:42.190689       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0528 20:57:43.681343       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.318536    2180 scope.go:117] "RemoveContainer" containerID="8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.319261    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"} err="failed to get container status \"8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.319309    2180 scope.go:117] "RemoveContainer" containerID="f33ef242166792858bb7e35af0e424e5293404f02d8624437e2b3f7064bccfe3"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.320032    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f33ef242166792858bb7e35af0e424e5293404f02d8624437e2b3f7064bccfe3"} err="failed to get container status \"f33ef242166792858bb7e35af0e424e5293404f02d8624437e2b3f7064bccfe3\": rpc error: code = Unknown desc = Error response from daemon: No such container: f33ef242166792858bb7e35af0e424e5293404f02d8624437e2b3f7064bccfe3"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.320071    2180 scope.go:117] "RemoveContainer" containerID="a635bd8eb222e82b1d672357c1567e1f1339f98d9b774a556f33e0a53a2452f2"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.320774    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a635bd8eb222e82b1d672357c1567e1f1339f98d9b774a556f33e0a53a2452f2"} err="failed to get container status \"a635bd8eb222e82b1d672357c1567e1f1339f98d9b774a556f33e0a53a2452f2\": rpc error: code = Unknown desc = Error response from daemon: No such container: a635bd8eb222e82b1d672357c1567e1f1339f98d9b774a556f33e0a53a2452f2"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.320827    2180 scope.go:117] "RemoveContainer" containerID="23ea09caa8d7a381be16bbb94f8e2e38c04b35b28b7d7da83239359e3a837a5d"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.321575    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"23ea09caa8d7a381be16bbb94f8e2e38c04b35b28b7d7da83239359e3a837a5d"} err="failed to get container status \"23ea09caa8d7a381be16bbb94f8e2e38c04b35b28b7d7da83239359e3a837a5d\": rpc error: code = Unknown desc = Error response from daemon: No such container: 23ea09caa8d7a381be16bbb94f8e2e38c04b35b28b7d7da83239359e3a837a5d"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.321624    2180 scope.go:117] "RemoveContainer" containerID="f2c3f694d6dfb45baa3b34018947d15de37c9490bba3b22da5613afd08c6d23f"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.322527    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f2c3f694d6dfb45baa3b34018947d15de37c9490bba3b22da5613afd08c6d23f"} err="failed to get container status \"f2c3f694d6dfb45baa3b34018947d15de37c9490bba3b22da5613afd08c6d23f\": rpc error: code = Unknown desc = Error response from daemon: No such container: f2c3f694d6dfb45baa3b34018947d15de37c9490bba3b22da5613afd08c6d23f"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.322568    2180 scope.go:117] "RemoveContainer" containerID="8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.323396    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1"} err="failed to get container status \"8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8c2ad7dbb95dc8f9ffe26438358e142555cec55e86ac93b09756daa0af3d39a1"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.323449    2180 scope.go:117] "RemoveContainer" containerID="8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.324183    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"} err="failed to get container status \"8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8d213553c3b55973fac4cbe6f15df0779fc4256e091ac8706415d998dffbc7b2"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.324242    2180 scope.go:117] "RemoveContainer" containerID="8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.342447    2180 scope.go:117] "RemoveContainer" containerID="8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e"
	May 28 21:01:53 addons-885631 kubelet[2180]: E0528 21:01:53.343396    2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e" containerID="8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.343440    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e"} err="failed to get container status \"8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8e91e40285a323fd781932db69186ed6f07d662201af97f8c4a11b52b0f3cf1e"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.343467    2180 scope.go:117] "RemoveContainer" containerID="dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.360655    2180 scope.go:117] "RemoveContainer" containerID="dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345"
	May 28 21:01:53 addons-885631 kubelet[2180]: E0528 21:01:53.361816    2180 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345" containerID="dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345"
	May 28 21:01:53 addons-885631 kubelet[2180]: I0528 21:01:53.361861    2180 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345"} err="failed to get container status \"dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345\": rpc error: code = Unknown desc = Error response from daemon: No such container: dfd698472df076d49bd818d8e4808070f0b6fe7a6898ef70d06cfecd3518b345"
	May 28 21:01:54 addons-885631 kubelet[2180]: I0528 21:01:54.187643    2180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a699f1a-635d-41fb-a88b-03e872226fab" path="/var/lib/kubelet/pods/2a699f1a-635d-41fb-a88b-03e872226fab/volumes"
	May 28 21:01:54 addons-885631 kubelet[2180]: I0528 21:01:54.188465    2180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2baf58cf-aeaf-43f8-ab9e-2579aacfdb59" path="/var/lib/kubelet/pods/2baf58cf-aeaf-43f8-ab9e-2579aacfdb59/volumes"
	May 28 21:01:54 addons-885631 kubelet[2180]: I0528 21:01:54.188892    2180 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48b99766-3c42-4918-af72-5d75258528bb" path="/var/lib/kubelet/pods/48b99766-3c42-4918-af72-5d75258528bb/volumes"
	
	
	==> storage-provisioner [b2552639da89] <==
	I0528 20:58:04.985644       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 20:58:05.001932       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 20:58:05.001989       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 20:58:05.019867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 20:58:05.022435       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-885631_5a9eb6d5-fd54-4253-9b8f-d4d52dd9ce39!
	I0528 20:58:05.034695       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ebeeaad8-6108-482e-b34a-cb1e83f2346b", APIVersion:"v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-885631_5a9eb6d5-fd54-4253-9b8f-d4d52dd9ce39 became leader
	I0528 20:58:05.122920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-885631_5a9eb6d5-fd54-4253-9b8f-d4d52dd9ce39!
	

                                                
                                                
-- /stdout --
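Note: the kube-apiserver log above illustrates the two admission-webhook failure modes side by side. The volcano webhooks log "Failed calling webhook, failing closed" (requests are rejected while volcano-admission-service is unreachable), while gcp-auth-mutate.k8s.io logs "failing open" (the error is recorded and the request proceeds). Which behavior applies is determined by each webhook's failurePolicy. Below is a minimal Go sketch, assuming client-go and a kubeconfig at the default path, that lists every mutating webhook's policy; it is illustrative only and not part of the test suite.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from ~/.kube/config (adjust the path for other profiles).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// failurePolicy "Fail" (the admissionregistration/v1 default) produces the
	// "failing closed" lines above; "Ignore" produces "failing open".
	list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Items {
		for _, w := range c.Webhooks {
			policy := "Fail (default)"
			if w.FailurePolicy != nil {
				policy = string(*w.FailurePolicy)
			}
			fmt.Printf("%s / %s: failurePolicy=%s\n", c.Name, w.Name, policy)
		}
	}
}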
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-885631 -n addons-885631
helpers_test.go:261: (dbg) Run:  kubectl --context addons-885631 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: volcano-admission-init-nr82g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-885631 describe pod volcano-admission-init-nr82g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-885631 describe pod volcano-admission-init-nr82g: exit status 1 (96.396354ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "volcano-admission-init-nr82g" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-885631 describe pod volcano-admission-init-nr82g: exit status 1
--- FAIL: TestAddons/parallel/Ingress (37.54s)
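Note: the decisive error in this test is the nslookup timeout. The nginx pod and the ingress controller were healthy, but DNS queries for hello-john.test sent to the node IP 192.168.49.2 (where the ingress-dns addon should answer on port 53) went unanswered for the full 15 seconds. A small Go reproduction of that check, equivalent to `nslookup hello-john.test 192.168.49.2`, is sketched below; the test itself shells out to nslookup, so this is illustrative only.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Route all lookups to the ingress-dns addon on the minikube node IP.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// A timeout here matches the ";; connection timed out" output above.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}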

                                                
                                    
TestFunctional/serial/ComponentHealth (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-409073 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-05-28 21:08:20 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0x400159d698 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.30.1 ImageID:docker-pullable://registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea ContainerID:docker://60158aefc077c035772373421ede638c2392924c11ee9e4b834a81cce97b1486}]}
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.49.2 PodIP:192.168.49.2 StartTime:2024-05-28 21:06:47 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0x400159d6f8 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0x40005040e0} Ready:true RestartCount:3 Image:registry.k8s.io/kube-controller-manager:v1.30.1 ImageID:docker-pullable://registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52 ContainerID:docker://b8026f72530b38ff72976118db8aa89b23a1d56f81d7d5f6cb9c54a53b57d6be}]}
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-409073
helpers_test.go:235: (dbg) docker inspect functional-409073:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a3062ebf60f47ce68fb3a953952ca7cd3a3d3a3df83c654fec377eb37cf20e6c",
	        "Created": "2024-05-28T21:04:25.914906717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1094327,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-28T21:04:26.238412831Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:acea75078737755d2f999491dfa245ea1d1040bffc73283b8c9ba9ff1fde89b5",
	        "ResolvConfPath": "/var/lib/docker/containers/a3062ebf60f47ce68fb3a953952ca7cd3a3d3a3df83c654fec377eb37cf20e6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a3062ebf60f47ce68fb3a953952ca7cd3a3d3a3df83c654fec377eb37cf20e6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/a3062ebf60f47ce68fb3a953952ca7cd3a3d3a3df83c654fec377eb37cf20e6c/hosts",
	        "LogPath": "/var/lib/docker/containers/a3062ebf60f47ce68fb3a953952ca7cd3a3d3a3df83c654fec377eb37cf20e6c/a3062ebf60f47ce68fb3a953952ca7cd3a3d3a3df83c654fec377eb37cf20e6c-json.log",
	        "Name": "/functional-409073",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409073:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409073",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/10f41f4a87327bbf229b45f4a7299a7ab0c2babca35176a2e1bcc21d5eeb60ce-init/diff:/var/lib/docker/overlay2/8e655f7297a0818a5a7e390e8907c6f4d26023cd8c9930299bc7c4352e4766d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/10f41f4a87327bbf229b45f4a7299a7ab0c2babca35176a2e1bcc21d5eeb60ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/10f41f4a87327bbf229b45f4a7299a7ab0c2babca35176a2e1bcc21d5eeb60ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/10f41f4a87327bbf229b45f4a7299a7ab0c2babca35176a2e1bcc21d5eeb60ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-409073",
	                "Source": "/var/lib/docker/volumes/functional-409073/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409073",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409073",
	                "name.minikube.sigs.k8s.io": "functional-409073",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5fb96d2d2e3d1a76bdf839b8f20a16aa7e3c90a11309fcf482628d3362c418bf",
	            "SandboxKey": "/var/run/docker/netns/5fb96d2d2e3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33934"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409073": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "17297d1493cd41a55549a3dd8bca5a42409e30bc8fbe2c89cd21a4f71faa1aec",
	                    "EndpointID": "19dae3e9679a20953f6ae851aa8cb8c6817f7e19ec8a75ff14c26cbb7964e597",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "functional-409073",
	                        "a3062ebf60f4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
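Note: the fields that matter in the inspect dump above are State.Status ("running") and the NetworkSettings.Ports bindings on 127.0.0.1 (8441 is the --apiserver-port requested at start). The same data can be read programmatically; a sketch using the Docker Go SDK (github.com/docker/docker/client) follows, assuming the container name from the report.

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	// Inspect the minikube node container captured in the dump above.
	info, err := cli.ContainerInspect(context.Background(), "functional-409073")
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", info.State.Status)
	// Print the host port bindings (NetworkSettings.Ports in the dump).
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}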
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-409073 -n functional-409073
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 logs -n 25: (1.117282845s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-561363 --log_dir                                                  | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	|         | /tmp/nospam-561363 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-561363 --log_dir                                                  | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	|         | /tmp/nospam-561363 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-561363 --log_dir                                                  | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	|         | /tmp/nospam-561363 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-561363 --log_dir                                                  | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	|         | /tmp/nospam-561363 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-561363 --log_dir                                                  | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	|         | /tmp/nospam-561363 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-561363 --log_dir                                                  | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	|         | /tmp/nospam-561363 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-561363                                                         | nospam-561363     | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:04 UTC |
	| start   | -p functional-409073                                                     | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:04 UTC | 28 May 24 21:05 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                   |         |         |                     |                     |
	| start   | -p functional-409073                                                     | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:05 UTC | 28 May 24 21:06 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-409073 cache add                                              | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-409073 cache add                                              | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-409073 cache add                                              | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-409073 cache add                                              | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | minikube-local-cache-test:functional-409073                              |                   |         |         |                     |                     |
	| cache   | functional-409073 cache delete                                           | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | minikube-local-cache-test:functional-409073                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	| ssh     | functional-409073 ssh sudo                                               | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-409073                                                        | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | ssh sudo docker rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-409073 ssh                                                    | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-409073 cache reload                                           | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	| ssh     | functional-409073 ssh                                                    | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-409073 kubectl --                                             | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:06 UTC |
	|         | --context functional-409073                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-409073                                                     | functional-409073 | jenkins | v1.33.1 | 28 May 24 21:06 UTC | 28 May 24 21:08 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 21:06:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 21:06:25.929656 1100921 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:06:25.929767 1100921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:06:25.929771 1100921 out.go:304] Setting ErrFile to fd 2...
	I0528 21:06:25.929775 1100921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:06:25.930053 1100921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:06:25.930476 1100921 out.go:298] Setting JSON to false
	I0528 21:06:25.931513 1100921 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17335,"bootTime":1716913051,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 21:06:25.931569 1100921 start.go:139] virtualization:  
	I0528 21:06:25.933983 1100921 out.go:177] * [functional-409073] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 21:06:25.936706 1100921 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:06:25.938632 1100921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:06:25.936835 1100921 notify.go:220] Checking for updates...
	I0528 21:06:25.942293 1100921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 21:06:25.944287 1100921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 21:06:25.946189 1100921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:06:25.947778 1100921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:06:25.950461 1100921 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:06:25.950583 1100921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:06:25.971231 1100921 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:06:25.971344 1100921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:06:26.043730 1100921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:72 SystemTime:2024-05-28 21:06:26.034543837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:06:26.043838 1100921 docker.go:295] overlay module found
	I0528 21:06:26.045716 1100921 out.go:177] * Using the docker driver based on existing profile
	I0528 21:06:26.047571 1100921 start.go:297] selected driver: docker
	I0528 21:06:26.047581 1100921 start.go:901] validating driver "docker" against &{Name:functional-409073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:06:26.047680 1100921 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:06:26.047783 1100921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:06:26.122600 1100921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:71 SystemTime:2024-05-28 21:06:26.112002349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:06:26.122980 1100921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:06:26.123001 1100921 cni.go:84] Creating CNI manager for ""
	I0528 21:06:26.123012 1100921 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 21:06:26.123056 1100921 start.go:340] cluster config:
	{Name:functional-409073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:06:26.125943 1100921 out.go:177] * Starting "functional-409073" primary control-plane node in "functional-409073" cluster
	I0528 21:06:26.128136 1100921 cache.go:121] Beginning downloading kic base image for docker with docker
	I0528 21:06:26.130662 1100921 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 21:06:26.132563 1100921 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 21:06:26.132608 1100921 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0528 21:06:26.132615 1100921 cache.go:56] Caching tarball of preloaded images
	I0528 21:06:26.132614 1100921 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 21:06:26.132783 1100921 preload.go:173] Found /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0528 21:06:26.132790 1100921 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0528 21:06:26.132958 1100921 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/config.json ...
	I0528 21:06:26.148349 1100921 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon, skipping pull
	I0528 21:06:26.148364 1100921 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in daemon, skipping load
	I0528 21:06:26.148391 1100921 cache.go:194] Successfully downloaded all kic artifacts
	I0528 21:06:26.148419 1100921 start.go:360] acquireMachinesLock for functional-409073: {Name:mkb72896a13160c0b896df48cc06ef12e15c2373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 21:06:26.148492 1100921 start.go:364] duration metric: took 53.784µs to acquireMachinesLock for "functional-409073"
	I0528 21:06:26.148513 1100921 start.go:96] Skipping create...Using existing machine configuration
	I0528 21:06:26.148525 1100921 fix.go:54] fixHost starting: 
	I0528 21:06:26.148816 1100921 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
	I0528 21:06:26.166849 1100921 fix.go:112] recreateIfNeeded on functional-409073: state=Running err=<nil>
	W0528 21:06:26.166881 1100921 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 21:06:26.169094 1100921 out.go:177] * Updating the running docker "functional-409073" container ...
	I0528 21:06:26.171032 1100921 machine.go:94] provisionDockerMachine start ...
	I0528 21:06:26.171231 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:26.187280 1100921 main.go:141] libmachine: Using SSH client type: native
	I0528 21:06:26.187587 1100921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0528 21:06:26.187594 1100921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 21:06:26.309326 1100921 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-409073
	
	I0528 21:06:26.309340 1100921 ubuntu.go:169] provisioning hostname "functional-409073"
	I0528 21:06:26.309412 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:26.326981 1100921 main.go:141] libmachine: Using SSH client type: native
	I0528 21:06:26.327236 1100921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0528 21:06:26.327250 1100921 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-409073 && echo "functional-409073" | sudo tee /etc/hostname
	I0528 21:06:26.461827 1100921 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-409073
	
	I0528 21:06:26.461892 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:26.479057 1100921 main.go:141] libmachine: Using SSH client type: native
	I0528 21:06:26.479315 1100921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0528 21:06:26.479328 1100921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409073' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409073/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409073' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 21:06:26.602166 1100921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:06:26.602181 1100921 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1064873/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1064873/.minikube}
	I0528 21:06:26.602206 1100921 ubuntu.go:177] setting up certificates
	I0528 21:06:26.602222 1100921 provision.go:84] configureAuth start
	I0528 21:06:26.602277 1100921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409073
	I0528 21:06:26.619209 1100921 provision.go:143] copyHostCerts
	I0528 21:06:26.619281 1100921 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem, removing ...
	I0528 21:06:26.619289 1100921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem
	I0528 21:06:26.619381 1100921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem (1078 bytes)
	I0528 21:06:26.619476 1100921 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem, removing ...
	I0528 21:06:26.619481 1100921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem
	I0528 21:06:26.619507 1100921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem (1123 bytes)
	I0528 21:06:26.619558 1100921 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem, removing ...
	I0528 21:06:26.619561 1100921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem
	I0528 21:06:26.619583 1100921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem (1679 bytes)
	I0528 21:06:26.619627 1100921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem org=jenkins.functional-409073 san=[127.0.0.1 192.168.49.2 functional-409073 localhost minikube]
	I0528 21:06:27.512145 1100921 provision.go:177] copyRemoteCerts
	I0528 21:06:27.512206 1100921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 21:06:27.512257 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:27.533275 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:06:27.631411 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 21:06:27.657411 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 21:06:27.685641 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 21:06:27.710685 1100921 provision.go:87] duration metric: took 1.108450686s to configureAuth
	I0528 21:06:27.710703 1100921 ubuntu.go:193] setting minikube options for container-runtime
	I0528 21:06:27.710909 1100921 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:06:27.710964 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:27.726846 1100921 main.go:141] libmachine: Using SSH client type: native
	I0528 21:06:27.727089 1100921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0528 21:06:27.727097 1100921 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 21:06:27.854869 1100921 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0528 21:06:27.854881 1100921 ubuntu.go:71] root file system type: overlay
	I0528 21:06:27.854999 1100921 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 21:06:27.855064 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:27.870729 1100921 main.go:141] libmachine: Using SSH client type: native
	I0528 21:06:27.870963 1100921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0528 21:06:27.871036 1100921 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 21:06:28.026334 1100921 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 21:06:28.026417 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:28.043948 1100921 main.go:141] libmachine: Using SSH client type: native
	I0528 21:06:28.044203 1100921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0528 21:06:28.044225 1100921 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 21:06:28.171530 1100921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0528 21:06:28.171567 1100921 machine.go:97] duration metric: took 2.000524913s to provisionDockerMachine
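The conditional in the step above only reloads and restarts Docker when the freshly rendered unit differs from the one already on disk. A quick way to confirm which unit the node ultimately runs (an illustrative check, not part of this run; `minikube ssh` and `systemctl cat` are standard commands, and the profile name is taken from this log):

    minikube -p functional-409073 ssh -- sudo systemctl cat docker.service
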
	I0528 21:06:28.171577 1100921 start.go:293] postStartSetup for "functional-409073" (driver="docker")
	I0528 21:06:28.171590 1100921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 21:06:28.171669 1100921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 21:06:28.171720 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:28.187851 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:06:28.283005 1100921 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 21:06:28.286203 1100921 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 21:06:28.286241 1100921 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 21:06:28.286250 1100921 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 21:06:28.286256 1100921 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 21:06:28.286274 1100921 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/addons for local assets ...
	I0528 21:06:28.286357 1100921 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/files for local assets ...
	I0528 21:06:28.286455 1100921 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem -> 10703092.pem in /etc/ssl/certs
	I0528 21:06:28.286575 1100921 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/test/nested/copy/1070309/hosts -> hosts in /etc/test/nested/copy/1070309
	I0528 21:06:28.286625 1100921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1070309
	I0528 21:06:28.295284 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem --> /etc/ssl/certs/10703092.pem (1708 bytes)
	I0528 21:06:28.320596 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/test/nested/copy/1070309/hosts --> /etc/test/nested/copy/1070309/hosts (40 bytes)
	I0528 21:06:28.343921 1100921 start.go:296] duration metric: took 172.329021ms for postStartSetup
	I0528 21:06:28.343990 1100921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:06:28.344027 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:28.361905 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:06:28.447408 1100921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 21:06:28.452024 1100921 fix.go:56] duration metric: took 2.303498774s for fixHost
	I0528 21:06:28.452039 1100921 start.go:83] releasing machines lock for "functional-409073", held for 2.303540217s
	I0528 21:06:28.452104 1100921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409073
	I0528 21:06:28.468490 1100921 ssh_runner.go:195] Run: cat /version.json
	I0528 21:06:28.468540 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:28.468555 1100921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 21:06:28.468608 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:06:28.485999 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:06:28.487133 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:06:28.573744 1100921 ssh_runner.go:195] Run: systemctl --version
	I0528 21:06:28.685544 1100921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 21:06:28.690644 1100921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0528 21:06:28.709472 1100921 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0528 21:06:28.709541 1100921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 21:06:28.720665 1100921 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0528 21:06:28.720685 1100921 start.go:494] detecting cgroup driver to use...
	I0528 21:06:28.720719 1100921 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 21:06:28.720824 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:06:28.737542 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 21:06:28.748214 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 21:06:28.758104 1100921 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 21:06:28.758176 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 21:06:28.768258 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 21:06:28.778150 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 21:06:28.788404 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 21:06:28.798611 1100921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 21:06:28.807562 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 21:06:28.817192 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 21:06:28.826833 1100921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0528 21:06:28.836881 1100921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 21:06:28.845743 1100921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 21:06:28.854539 1100921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:06:28.976841 1100921 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 21:06:39.264718 1100921 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.287852933s)
	I0528 21:06:39.264735 1100921 start.go:494] detecting cgroup driver to use...
	I0528 21:06:39.264765 1100921 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 21:06:39.264812 1100921 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 21:06:39.280403 1100921 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0528 21:06:39.280467 1100921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 21:06:39.294342 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 21:06:39.318223 1100921 ssh_runner.go:195] Run: which cri-dockerd
	I0528 21:06:39.321959 1100921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 21:06:39.331852 1100921 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 21:06:39.353237 1100921 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 21:06:39.468760 1100921 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 21:06:39.590269 1100921 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 21:06:39.590370 1100921 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0528 21:06:39.611612 1100921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:06:39.726132 1100921 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 21:06:40.242441 1100921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 21:06:40.256621 1100921 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0528 21:06:40.277021 1100921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 21:06:40.290211 1100921 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 21:06:40.383995 1100921 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 21:06:40.482400 1100921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:06:40.579594 1100921 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 21:06:40.595138 1100921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 21:06:40.607019 1100921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:06:40.705966 1100921 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 21:06:40.788276 1100921 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 21:06:40.788336 1100921 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 21:06:40.791979 1100921 start.go:562] Will wait 60s for crictl version
	I0528 21:06:40.792038 1100921 ssh_runner.go:195] Run: which crictl
	I0528 21:06:40.795508 1100921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 21:06:40.834086 1100921 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
	I0528 21:06:40.834146 1100921 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 21:06:40.859132 1100921 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 21:06:40.883714 1100921 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0528 21:06:40.883822 1100921 cli_runner.go:164] Run: docker network inspect functional-409073 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 21:06:40.898105 1100921 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0528 21:06:40.903629 1100921 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0528 21:06:40.905467 1100921 kubeadm.go:877] updating cluster {Name:functional-409073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 21:06:40.905606 1100921 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 21:06:40.905681 1100921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 21:06:40.923661 1100921 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409073
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0528 21:06:40.923673 1100921 docker.go:615] Images already preloaded, skipping extraction
	I0528 21:06:40.923742 1100921 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 21:06:40.940614 1100921 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409073
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0528 21:06:40.940628 1100921 cache_images.go:84] Images are preloaded, skipping loading
	I0528 21:06:40.940638 1100921 kubeadm.go:928] updating node { 192.168.49.2 8441 v1.30.1 docker true true} ...
	I0528 21:06:40.940756 1100921 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409073 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 21:06:40.940819 1100921 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 21:06:40.984812 1100921 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
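The override recorded above comes from the profile's ExtraOptions; a representative way to set it at start time (illustrative invocation only, reusing the profile name and plugin value from this log) is:

    minikube start -p functional-409073 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
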
	I0528 21:06:40.984887 1100921 cni.go:84] Creating CNI manager for ""
	I0528 21:06:40.984900 1100921 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 21:06:40.984908 1100921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 21:06:40.984927 1100921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409073 NodeName:functional-409073 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 21:06:40.985067 1100921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409073"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
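The manifest above is written to /var/tmp/minikube/kubeadm.yaml.new and, as the drift check later in this log shows, copied over the active file when it differs. To inspect the copy the node is actually using (illustrative command; the path is taken from this log):

    minikube -p functional-409073 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
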
	
	I0528 21:06:40.985130 1100921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 21:06:40.993822 1100921 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 21:06:40.993882 1100921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 21:06:41.003532 1100921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0528 21:06:41.023358 1100921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 21:06:41.041888 1100921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2009 bytes)
	I0528 21:06:41.059851 1100921 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0528 21:06:41.063357 1100921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:06:41.150114 1100921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:06:41.162115 1100921 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073 for IP: 192.168.49.2
	I0528 21:06:41.162126 1100921 certs.go:194] generating shared ca certs ...
	I0528 21:06:41.162141 1100921 certs.go:226] acquiring lock for ca certs: {Name:mk5cb73d5e2c9c3b65010257baa77ed890ffd0a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:06:41.162292 1100921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key
	I0528 21:06:41.162335 1100921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key
	I0528 21:06:41.162343 1100921 certs.go:256] generating profile certs ...
	I0528 21:06:41.162424 1100921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.key
	I0528 21:06:41.162472 1100921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/apiserver.key.771678d1
	I0528 21:06:41.162507 1100921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/proxy-client.key
	I0528 21:06:41.162615 1100921 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309.pem (1338 bytes)
	W0528 21:06:41.162642 1100921 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309_empty.pem, impossibly tiny 0 bytes
	I0528 21:06:41.162649 1100921 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 21:06:41.162673 1100921 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem (1078 bytes)
	I0528 21:06:41.162693 1100921 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem (1123 bytes)
	I0528 21:06:41.162715 1100921 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem (1679 bytes)
	I0528 21:06:41.162754 1100921 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem (1708 bytes)
	I0528 21:06:41.163423 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 21:06:41.187841 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 21:06:41.211685 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 21:06:41.235392 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 21:06:41.259322 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 21:06:41.286922 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 21:06:41.311123 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 21:06:41.356274 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 21:06:41.397502 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309.pem --> /usr/share/ca-certificates/1070309.pem (1338 bytes)
	I0528 21:06:41.464930 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem --> /usr/share/ca-certificates/10703092.pem (1708 bytes)
	I0528 21:06:41.495472 1100921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 21:06:41.531641 1100921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 21:06:41.562774 1100921 ssh_runner.go:195] Run: openssl version
	I0528 21:06:41.573682 1100921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10703092.pem && ln -fs /usr/share/ca-certificates/10703092.pem /etc/ssl/certs/10703092.pem"
	I0528 21:06:41.585988 1100921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10703092.pem
	I0528 21:06:41.607181 1100921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 21:04 /usr/share/ca-certificates/10703092.pem
	I0528 21:06:41.607244 1100921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10703092.pem
	I0528 21:06:41.625230 1100921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10703092.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 21:06:41.652368 1100921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 21:06:41.663519 1100921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:06:41.672727 1100921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:06:41.672784 1100921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 21:06:41.703869 1100921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0528 21:06:41.719480 1100921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1070309.pem && ln -fs /usr/share/ca-certificates/1070309.pem /etc/ssl/certs/1070309.pem"
	I0528 21:06:41.735406 1100921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1070309.pem
	I0528 21:06:41.750933 1100921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 21:04 /usr/share/ca-certificates/1070309.pem
	I0528 21:06:41.750987 1100921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1070309.pem
	I0528 21:06:41.765825 1100921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1070309.pem /etc/ssl/certs/51391683.0"
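Each `openssl x509 -hash -noout` call above prints the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (e.g. b5213941.0), which is how CA lookup by hash works. A minimal sketch of the same linking step (illustrative; file names taken from this log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
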
	I0528 21:06:41.781494 1100921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 21:06:41.793962 1100921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 21:06:41.805532 1100921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 21:06:41.826096 1100921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 21:06:41.840078 1100921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 21:06:41.849585 1100921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 21:06:41.861511 1100921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
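The `-checkend 86400` probes above succeed only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero exit flags a cert that is expiring or already expired. Standalone form of the same check (illustrative, using one of the paths from this log):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expiring within 24h'
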
	I0528 21:06:41.870602 1100921 kubeadm.go:391] StartCluster: {Name:functional-409073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:06:41.870737 1100921 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 21:06:41.901662 1100921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 21:06:41.915981 1100921 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 21:06:41.915993 1100921 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 21:06:41.915997 1100921 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 21:06:41.916044 1100921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 21:06:41.928407 1100921 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:06:41.928980 1100921 kubeconfig.go:125] found "functional-409073" server: "https://192.168.49.2:8441"
	I0528 21:06:41.930419 1100921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 21:06:41.950154 1100921 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-05-28 21:04:33.596827359 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-05-28 21:06:41.053778784 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0528 21:06:41.950164 1100921 kubeadm.go:1154] stopping kube-system containers ...
	I0528 21:06:41.950222 1100921 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 21:06:41.985048 1100921 docker.go:483] Stopping containers: [5b873d4310ab 26263380714d 5e8aaee2d22f a013361ac0d2 8d93f949d402 ef7c0f33ec3f b76c40aeee5a 5f0fa275fd91 e39786987b5e 70119a42f997 77a0372a090a 915d4e08f590 3f48065df0f0 11654ede906a 968fafca4aa7 59e78c8e61a3 d4fb94f026fd 59fec8d8a779 1f9f0b29ca4e 1de695a91d9a 21169c571c00 2e295177476d ad61f77bcd9b 4923760fae7a c67f8c93146f 13c3565ee483 8ac25a7550c1 0cad53f6fe58 1a6c35edf2ff bdf29373fe0c]
	I0528 21:06:41.985119 1100921 ssh_runner.go:195] Run: docker stop 5b873d4310ab 26263380714d 5e8aaee2d22f a013361ac0d2 8d93f949d402 ef7c0f33ec3f b76c40aeee5a 5f0fa275fd91 e39786987b5e 70119a42f997 77a0372a090a 915d4e08f590 3f48065df0f0 11654ede906a 968fafca4aa7 59e78c8e61a3 d4fb94f026fd 59fec8d8a779 1f9f0b29ca4e 1de695a91d9a 21169c571c00 2e295177476d ad61f77bcd9b 4923760fae7a c67f8c93146f 13c3565ee483 8ac25a7550c1 0cad53f6fe58 1a6c35edf2ff bdf29373fe0c
	I0528 21:06:42.611598 1100921 ssh_runner.go:195] Run: sudo systemctl stop kubelet
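
Tearing down the old control plane is two steps: stop every container whose name matches kubelet's k8s_<container>_<pod>_(kube-system)_ naming pattern, then stop the kubelet so it does not restart them mid-reconfigure. A sketch of the container half, assuming a local docker CLI rather than the ssh transport used in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List all kube-system pod containers (running or not) by name pattern.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return
	}
	fmt.Println("stopping containers:", ids)
	// One `docker stop` with every ID, matching the single call in the log.
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
}
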
	I0528 21:06:42.706432 1100921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 21:06:42.717958 1100921 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5651 May 28 21:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 May 28 21:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May 28 21:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 May 28 21:04 /etc/kubernetes/scheduler.conf
	
	I0528 21:06:42.718033 1100921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0528 21:06:42.741729 1100921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0528 21:06:42.753463 1100921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0528 21:06:42.774160 1100921 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:06:42.774214 1100921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 21:06:42.786266 1100921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0528 21:06:42.797071 1100921 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0528 21:06:42.797126 1100921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
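
Each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint with a plain grep; a file that does not mention https://control-plane.minikube.internal:8441 (grep exits 1) is removed so the kubeconfig phase below regenerates it. A stdlib-only sketch of that check, assuming direct file access instead of the log's sudo/ssh calls:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8441"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil {
			continue // missing file: kubeadm will create it
		}
		if !strings.Contains(string(data), endpoint) {
			// Stale endpoint: drop the file so kubeadm rewrites it.
			fmt.Println("removing", path)
			_ = os.Remove(path)
		}
	}
}
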
	I0528 21:06:42.806317 1100921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 21:06:42.815743 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:06:42.868223 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:06:47.283102 1100921 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (4.414853307s)
	I0528 21:06:47.283120 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:06:47.469779 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:06:47.574952 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
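
Rather than a full `kubeadm init`, the restart path replays the individual init phases seen above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the new config. A sketch of that sequence, assuming kubeadm is on PATH locally instead of under /var/lib/minikube/binaries as in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", cfg)
		fmt.Println("kubeadm", args)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			panic(fmt.Sprintf("%s: %v\n%s", phase[0], err, out))
		}
	}
}
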
	I0528 21:06:47.684104 1100921 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:06:47.684171 1100921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:06:48.184313 1100921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:06:48.685216 1100921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:06:48.701210 1100921 api_server.go:72] duration metric: took 1.017105979s to wait for apiserver process to appear ...
	I0528 21:06:48.701226 1100921 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:06:48.701249 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:48.701508 1100921 api_server.go:269] stopped: https://192.168.49.2:8441/healthz: Get "https://192.168.49.2:8441/healthz": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:06:49.202147 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:52.199149 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:06:52.199165 1100921 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:06:52.199179 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:52.259515 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:06:52.259532 1100921 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:06:52.259544 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:52.356727 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0528 21:06:52.356745 1100921 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0528 21:06:52.702256 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:52.709846 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:06:52.709862 1100921 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:06:53.201398 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:53.213656 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0528 21:06:53.213676 1100921 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0528 21:06:53.702075 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:06:53.712603 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0528 21:06:53.726680 1100921 api_server.go:141] control plane version: v1.30.1
	I0528 21:06:53.726699 1100921 api_server.go:131] duration metric: took 5.025468029s to wait for apiserver health ...
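
The health wait above is a simple poll of the apiserver's /healthz endpoint at roughly 500ms intervals: connection refused, 403 (the probe is anonymous, so RBAC rejects it until the bootstrap roles land), and 500 (post-start hooks still settling) all mean "keep waiting"; only a 200 with body "ok" ends the loop. A compressed sketch, assuming the self-signed serving cert is skipped rather than verified against minikube's CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: skip verification of the apiserver's serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8441/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz unreachable, retrying:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("apiserver never became healthy")
}
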
	I0528 21:06:53.726707 1100921 cni.go:84] Creating CNI manager for ""
	I0528 21:06:53.726719 1100921 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 21:06:53.729257 1100921 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 21:06:53.731742 1100921 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 21:06:53.741104 1100921 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
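
With the "docker" driver and docker runtime on Kubernetes v1.24+, minikube drops a bridge CNI conflist into /etc/cni/net.d (496 bytes here, copied over scp). The log does not show the file's contents; the config below is a generic bridge + portmap chain of the kind such a conflist typically contains, not minikube's exact bytes:

package main

import "os"

// An illustrative bridge CNI chain; field values (subnet, bridge name) are
// assumptions, not the contents of minikube's 1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
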
	I0528 21:06:53.761012 1100921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:06:53.772750 1100921 system_pods.go:59] 7 kube-system pods found
	I0528 21:06:53.772780 1100921 system_pods.go:61] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0528 21:06:53.772787 1100921 system_pods.go:61] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0528 21:06:53.772795 1100921 system_pods.go:61] "kube-apiserver-functional-409073" [e607f425-416b-4a21-b0af-1eaa2ee6538a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:06:53.772802 1100921 system_pods.go:61] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0528 21:06:53.772807 1100921 system_pods.go:61] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0528 21:06:53.772812 1100921 system_pods.go:61] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0528 21:06:53.772817 1100921 system_pods.go:61] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:06:53.772822 1100921 system_pods.go:74] duration metric: took 11.800899ms to wait for pod list to return data ...
	I0528 21:06:53.772830 1100921 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:06:53.776532 1100921 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0528 21:06:53.776551 1100921 node_conditions.go:123] node cpu capacity is 2
	I0528 21:06:53.776561 1100921 node_conditions.go:105] duration metric: took 3.727566ms to run NodePressure ...
	I0528 21:06:53.776578 1100921 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0528 21:06:54.065819 1100921 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0528 21:06:54.073668 1100921 kubeadm.go:733] kubelet initialised
	I0528 21:06:54.073678 1100921 kubeadm.go:734] duration metric: took 7.847345ms waiting for restarted kubelet to initialise ...
	I0528 21:06:54.073686 1100921 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:06:54.079908 1100921 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace to be "Ready" ...
	I0528 21:06:56.087285 1100921 pod_ready.go:102] pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace has status "Ready":"False"
	I0528 21:06:57.086761 1100921 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace has status "Ready":"True"
	I0528 21:06:57.086774 1100921 pod_ready.go:81] duration metric: took 3.006850379s for pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace to be "Ready" ...
	I0528 21:06:57.086784 1100921 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:06:59.092746 1100921 pod_ready.go:102] pod "etcd-functional-409073" in "kube-system" namespace has status "Ready":"False"
	I0528 21:07:01.095314 1100921 pod_ready.go:102] pod "etcd-functional-409073" in "kube-system" namespace has status "Ready":"False"
	I0528 21:07:03.591982 1100921 pod_ready.go:102] pod "etcd-functional-409073" in "kube-system" namespace has status "Ready":"False"
	I0528 21:07:05.593162 1100921 pod_ready.go:102] pod "etcd-functional-409073" in "kube-system" namespace has status "Ready":"False"
	I0528 21:07:08.600460 1100921 pod_ready.go:97] error getting pod "etcd-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.600475 1100921 pod_ready.go:81] duration metric: took 11.513685319s for pod "etcd-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:08.600485 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "etcd-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.600508 1100921 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:08.600755 1100921 pod_ready.go:97] error getting pod "kube-apiserver-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.600766 1100921 pod_ready.go:81] duration metric: took 250.948µs for pod "kube-apiserver-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:08.600775 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.600794 1100921 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:08.601004 1100921 pod_ready.go:97] error getting pod "kube-controller-manager-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.601011 1100921 pod_ready.go:81] duration metric: took 210.735µs for pod "kube-controller-manager-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:08.601017 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.601035 1100921 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xccsc" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:08.601183 1100921 pod_ready.go:97] error getting pod "kube-proxy-xccsc" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xccsc": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.601189 1100921 pod_ready.go:81] duration metric: took 149.329µs for pod "kube-proxy-xccsc" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:08.601195 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-xccsc" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-xccsc": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.601210 1100921 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:08.601414 1100921 pod_ready.go:97] error getting pod "kube-scheduler-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.601423 1100921 pod_ready.go:81] duration metric: took 207.323µs for pod "kube-scheduler-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:08.601430 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-409073" in "kube-system" namespace (skipping!): Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:08.601447 1100921 pod_ready.go:38] duration metric: took 14.527750721s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
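
The pod_ready loop gives each system-critical pod up to 4m to report a Ready condition of "True", polling through the API and skipping the pod (with the errors above) once the apiserver stops answering. A sketch of one such check, driven through kubectl's jsonpath output rather than a Go API client:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks the apiserver for the pod's Ready condition via kubectl.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "coredns-7db6d8ff4d-kbbld")
		if err != nil {
			fmt.Println("error getting pod (apiserver down?):", err)
			break // the log above skips the pod on API errors
		}
		if ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting")
}
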
	I0528 21:07:08.601462 1100921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 21:07:08.608809 1100921 ops.go:34] apiserver oom_adj: -16
	I0528 21:07:08.608821 1100921 kubeadm.go:591] duration metric: took 26.692819252s to restartPrimaryControlPlane
	I0528 21:07:08.608830 1100921 kubeadm.go:393] duration metric: took 26.738237259s to StartCluster
	I0528 21:07:08.608844 1100921 settings.go:142] acquiring lock: {Name:mk9dd4e0f1e49f25e638e0ae0a582e344ec1255d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:07:08.608899 1100921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 21:07:08.609566 1100921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/kubeconfig: {Name:mk43b4b38c110ff2ffbd3a6de61be9ad6b977a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 21:07:08.609780 1100921 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 21:07:08.612677 1100921 out.go:177] * Verifying Kubernetes components...
	I0528 21:07:08.610045 1100921 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:07:08.610062 1100921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 21:07:08.615357 1100921 addons.go:69] Setting storage-provisioner=true in profile "functional-409073"
	I0528 21:07:08.615380 1100921 addons.go:234] Setting addon storage-provisioner=true in "functional-409073"
	W0528 21:07:08.615385 1100921 addons.go:243] addon storage-provisioner should already be in state true
	I0528 21:07:08.615412 1100921 host.go:66] Checking if "functional-409073" exists ...
	I0528 21:07:08.615426 1100921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 21:07:08.615541 1100921 addons.go:69] Setting default-storageclass=true in profile "functional-409073"
	I0528 21:07:08.615558 1100921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-409073"
	I0528 21:07:08.615796 1100921 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
	I0528 21:07:08.615832 1100921 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
	I0528 21:07:08.638683 1100921 addons.go:234] Setting addon default-storageclass=true in "functional-409073"
	W0528 21:07:08.638694 1100921 addons.go:243] addon default-storageclass should already be in state true
	I0528 21:07:08.638716 1100921 host.go:66] Checking if "functional-409073" exists ...
	I0528 21:07:08.639089 1100921 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
	I0528 21:07:08.675444 1100921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 21:07:08.675428 1100921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 21:07:08.677189 1100921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 21:07:08.677204 1100921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:07:08.677212 1100921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 21:07:08.677265 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:07:08.677266 1100921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
	I0528 21:07:08.715031 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:07:08.725069 1100921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
	I0528 21:07:08.812482 1100921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 21:07:08.828787 1100921 node_ready.go:35] waiting up to 6m0s for node "functional-409073" to be "Ready" ...
	I0528 21:07:08.878195 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:07:08.895085 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:08.969413 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:08.969436 1100921 retry.go:31] will retry after 368.479791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0528 21:07:08.981849 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:08.981870 1100921 retry.go:31] will retry after 263.548925ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
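
Every failed apply in this stretch has the same root cause: `kubectl apply` validates manifests against the server's OpenAPI schema, and with the apiserver mid-restart the schema download from localhost:8441 is refused. minikube responds by retrying each apply with growing, jittered delays (368ms, 263ms, 419ms, ... up to roughly 15s further below). A sketch of that retry shape; the exact backoff formula is an assumption, not minikube's retry.go:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, sleeping an
// exponentially growing, jittered delay between failures.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the delay each round and add up to 50% random jitter.
		d := base << uint(i)
		d += time.Duration(rand.Int63n(int64(d / 2)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	i := 0
	err := retry(5, 300*time.Millisecond, func() error {
		i++
		if i < 4 {
			return fmt.Errorf("apply failed (attempt %d)", i)
		}
		return nil
	})
	fmt.Println("final result:", err)
}
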
	I0528 21:07:09.246396 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:09.317575 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:09.317603 1100921 retry.go:31] will retry after 419.976176ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:09.338480 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:09.403071 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:09.403092 1100921 retry.go:31] will retry after 539.523687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:09.738709 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:09.809936 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:09.809973 1100921 retry.go:31] will retry after 638.749821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:09.942821 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:10.011551 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:10.011576 1100921 retry.go:31] will retry after 394.611063ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:10.406628 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:07:10.449092 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:10.473225 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:10.473246 1100921 retry.go:31] will retry after 453.859168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0528 21:07:10.523755 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:10.523777 1100921 retry.go:31] will retry after 443.816958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:10.829373 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:10.927625 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:07:10.968227 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:11.000993 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:11.001017 1100921 retry.go:31] will retry after 1.854392101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0528 21:07:11.047162 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:11.047182 1100921 retry.go:31] will retry after 1.336651142s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:12.385025 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:12.452783 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:12.452803 1100921 retry.go:31] will retry after 969.3049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:12.856415 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:12.924456 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:12.924475 1100921 retry.go:31] will retry after 2.260136419s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:13.329274 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:13.422588 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:13.486067 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:13.486087 1100921 retry.go:31] will retry after 2.724779372s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:15.185609 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:15.255117 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:15.255138 1100921 retry.go:31] will retry after 2.740794383s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:15.329618 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:16.211260 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:16.280099 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:16.280121 1100921 retry.go:31] will retry after 4.925586513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:17.830283 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:17.996656 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:18.069392 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:18.069414 1100921 retry.go:31] will retry after 5.368243141s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:20.329435 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:21.205922 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:21.268965 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:21.268986 1100921 retry.go:31] will retry after 6.670276018s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:22.330083 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:23.438558 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:23.509485 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:23.509516 1100921 retry.go:31] will retry after 7.998580489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:24.829345 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:26.829439 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:27.939672 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:28.007449 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:28.007472 1100921 retry.go:31] will retry after 5.587408551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:28.830342 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:31.329492 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:31.508937 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:31.573346 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:31.573367 1100921 retry.go:31] will retry after 11.522761937s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:33.330094 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:33.595444 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:33.678553 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:33.678586 1100921 retry.go:31] will retry after 14.925499409s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:35.830076 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:38.329448 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:40.330369 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:42.829390 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:43.096749 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0528 21:07:43.163949 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:43.163968 1100921 retry.go:31] will retry after 15.264757262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:44.830105 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:46.830335 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:48.604258 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 21:07:48.665849 1100921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0528 21:07:48.665870 1100921 retry.go:31] will retry after 21.390175001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
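
	The retry.go entries above show the addon apply loop backing off with growing, jittered delays (7.99s, 5.58s, 11.52s, 14.92s, 15.26s, 21.39s) while port 8441 refuses connections. A minimal Go sketch of that retry shape, using invented constants and a hypothetical applyWithRetry helper rather than minikube's actual policy:

```go
package sketch

import (
	"fmt"
	"log"
	"math/rand"
	"time"
)

// applyWithRetry is a hypothetical helper mirroring the cadence logged above:
// each failed apply schedules the next attempt after a growing, jittered delay.
// Jitter keeps the two concurrent appliers (storageclass, storage-provisioner)
// from retrying against the down apiserver in lockstep.
func applyWithRetry(apply func() error, attempts int) error {
	base := 4 * time.Second
	for i := 0; i < attempts; i++ {
		err := apply()
		if err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		log.Printf("apply failed, will retry after %s: %v", delay, err)
		time.Sleep(delay)
		if base < 20*time.Second {
			base += base / 2 // grow roughly 1.5x per attempt
		}
	}
	return fmt.Errorf("apply still failing after %d attempts", attempts)
}
```

	Here apply would wrap the logged `kubectl apply --force -f ...` invocation and simply return its error.
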
	I0528 21:07:49.329321 1100921 node_ready.go:53] error getting node "functional-409073": Get "https://192.168.49.2:8441/api/v1/nodes/functional-409073": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:52.962124 1100921 node_ready.go:49] node "functional-409073" has status "Ready":"True"
	I0528 21:07:52.962137 1100921 node_ready.go:38] duration metric: took 44.133330079s for node "functional-409073" to be "Ready" ...
	I0528 21:07:52.962145 1100921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 21:07:53.011520 1100921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:53.038504 1100921 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace has status "Ready":"True"
	I0528 21:07:53.038515 1100921 pod_ready.go:81] duration metric: took 26.979432ms for pod "coredns-7db6d8ff4d-kbbld" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:53.038525 1100921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:53.545545 1100921 pod_ready.go:92] pod "etcd-functional-409073" in "kube-system" namespace has status "Ready":"True"
	I0528 21:07:53.545557 1100921 pod_ready.go:81] duration metric: took 507.025866ms for pod "etcd-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:53.545566 1100921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:55.052060 1100921 pod_ready.go:97] node "functional-409073" hosting pod "kube-apiserver-functional-409073" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.052080 1100921 pod_ready.go:81] duration metric: took 1.506506885s for pod "kube-apiserver-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:55.052090 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-409073" hosting pod "kube-apiserver-functional-409073" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.052112 1100921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:55.057983 1100921 pod_ready.go:97] node "functional-409073" hosting pod "kube-controller-manager-functional-409073" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.058036 1100921 pod_ready.go:81] duration metric: took 5.880532ms for pod "kube-controller-manager-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:55.058046 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-409073" hosting pod "kube-controller-manager-functional-409073" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.058066 1100921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xccsc" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:55.063996 1100921 pod_ready.go:97] node "functional-409073" hosting pod "kube-proxy-xccsc" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.064011 1100921 pod_ready.go:81] duration metric: took 5.936301ms for pod "kube-proxy-xccsc" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:55.064019 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-409073" hosting pod "kube-proxy-xccsc" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.064042 1100921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-409073" in "kube-system" namespace to be "Ready" ...
	I0528 21:07:55.165345 1100921 pod_ready.go:97] node "functional-409073" hosting pod "kube-scheduler-functional-409073" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.165360 1100921 pod_ready.go:81] duration metric: took 101.31083ms for pod "kube-scheduler-functional-409073" in "kube-system" namespace to be "Ready" ...
	E0528 21:07:55.165369 1100921 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-409073" hosting pod "kube-scheduler-functional-409073" in "kube-system" namespace is currently not "Ready" (skipping!): node "functional-409073" has status "Ready":"Unknown"
	I0528 21:07:55.165388 1100921 pod_ready.go:38] duration metric: took 2.203233495s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
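
	Once the node went Ready, the pod_ready.go phase above reduced to one question per system pod: is its PodReady condition True? (Pods on a node whose Ready status is "Unknown" are skipped, as the WaitExtra messages show.) A sketch of that condition check using client-go types; the helper name is invented:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// podReady reports whether a pod's PodReady condition is True, the test the
// pod_ready.go lines above apply to each system-critical pod. The upstream
// caller skips pods hosted on a node that is not itself Ready.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```
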
	I0528 21:07:55.165405 1100921 api_server.go:52] waiting for apiserver process to appear ...
	I0528 21:07:55.165466 1100921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:07:55.178552 1100921 api_server.go:72] duration metric: took 46.568744363s to wait for apiserver process to appear ...
	I0528 21:07:55.178575 1100921 api_server.go:88] waiting for apiserver healthz status ...
	I0528 21:07:55.178612 1100921 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0528 21:07:55.187514 1100921 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0528 21:07:55.188617 1100921 api_server.go:141] control plane version: v1.30.1
	I0528 21:07:55.188631 1100921 api_server.go:131] duration metric: took 10.05082ms to wait for apiserver health ...
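
	The api_server.go lines above poll https://192.168.49.2:8441/healthz until it answers 200 with the body "ok". A probe of the same shape; InsecureSkipVerify is a shortcut standing in for the cluster-CA handling a real client would need:

```go
package sketch

import (
	"crypto/tls"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy performs the healthz probe logged above: a GET that must
// return 200 with the literal body "ok". TLS verification is skipped here
// purely to keep the sketch short.
func apiserverHealthy(url string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url) // e.g. "https://192.168.49.2:8441/healthz"
	if err != nil {
		return false // connection refused while the apiserver restarts
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return err == nil && resp.StatusCode == http.StatusOK &&
		strings.TrimSpace(string(body)) == "ok"
}
```
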
	I0528 21:07:55.188638 1100921 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 21:07:55.368735 1100921 system_pods.go:59] 7 kube-system pods found
	I0528 21:07:55.368750 1100921 system_pods.go:61] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:55.368754 1100921 system_pods.go:61] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:55.368758 1100921 system_pods.go:61] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:55.368761 1100921 system_pods.go:61] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:55.368764 1100921 system_pods.go:61] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:55.368767 1100921 system_pods.go:61] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:55.368773 1100921 system_pods.go:61] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:55.368779 1100921 system_pods.go:74] duration metric: took 180.135822ms to wait for pod list to return data ...
	I0528 21:07:55.368787 1100921 default_sa.go:34] waiting for default service account to be created ...
	I0528 21:07:55.565792 1100921 default_sa.go:45] found service account: "default"
	I0528 21:07:55.565806 1100921 default_sa.go:55] duration metric: took 197.013728ms for default service account to be created ...
	I0528 21:07:55.565814 1100921 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 21:07:55.768928 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:55.768959 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:55.768964 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:55.768968 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:55.768972 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:55.768975 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:55.768979 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:55.768986 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:55.768998 1100921 retry.go:31] will retry after 266.447702ms: missing components: kube-apiserver
	I0528 21:07:56.041661 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:56.041675 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:56.041680 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:56.041683 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:56.041687 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:56.041689 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:56.041692 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:56.041699 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:56.041713 1100921 retry.go:31] will retry after 323.731299ms: missing components: kube-apiserver
	I0528 21:07:56.372520 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:56.372535 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:56.372540 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:56.372543 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:56.372547 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:56.372552 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:56.372555 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:56.372560 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:56.372573 1100921 retry.go:31] will retry after 368.984877ms: missing components: kube-apiserver
	I0528 21:07:56.748084 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:56.748101 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:56.748106 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:56.748110 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:56.748113 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:56.748116 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:56.748119 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:56.748125 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:56.748138 1100921 retry.go:31] will retry after 532.417819ms: missing components: kube-apiserver
	I0528 21:07:57.286544 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:57.286560 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:57.286565 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:57.286568 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:57.286572 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:57.286576 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:57.286579 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:57.286585 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:57.286598 1100921 retry.go:31] will retry after 751.984331ms: missing components: kube-apiserver
	I0528 21:07:58.045235 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:58.045250 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:58.045254 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:58.045258 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:58.045261 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:58.045264 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:58.045267 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:58.045273 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:58.045286 1100921 retry.go:31] will retry after 903.529843ms: missing components: kube-apiserver
	I0528 21:07:58.429850 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 21:07:58.955361 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:07:58.955375 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:07:58.955380 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:07:58.955384 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:07:58.955387 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:07:58.955390 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:07:58.955393 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:07:58.955399 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:07:58.955413 1100921 retry.go:31] will retry after 1.100576017s: missing components: kube-apiserver
	I0528 21:08:00.067008 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:00.067026 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:00.067031 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:00.067035 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:00.067039 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:00.067043 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:00.067047 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:00.067052 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:00.067068 1100921 retry.go:31] will retry after 1.261162334s: missing components: kube-apiserver
	I0528 21:08:01.335739 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:01.335756 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:01.335761 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:01.335768 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:01.335779 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:01.335783 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:01.335787 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:01.335794 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:01.335811 1100921 retry.go:31] will retry after 1.602849072s: missing components: kube-apiserver
	I0528 21:08:02.945151 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:02.945166 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:02.945171 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:02.945174 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:02.945178 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:02.945181 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:02.945184 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:02.945189 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:02.945203 1100921 retry.go:31] will retry after 2.174802516s: missing components: kube-apiserver
	I0528 21:08:05.127316 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:05.127332 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:05.127336 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:05.127342 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:05.127346 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:05.127348 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:05.127352 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:05.127357 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:05.127371 1100921 retry.go:31] will retry after 2.563434738s: missing components: kube-apiserver
	I0528 21:08:07.697163 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:07.697177 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:07.697182 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:07.697185 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:07.697188 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:07.697191 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:07.697194 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:07.697200 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:07.697214 1100921 retry.go:31] will retry after 3.469657112s: missing components: kube-apiserver
	I0528 21:08:10.056778 1100921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 21:08:10.724233 1100921 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0528 21:08:10.725855 1100921 addons.go:510] duration metric: took 1m2.11578914s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0528 21:08:11.173899 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:11.173913 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:11.173918 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:11.173921 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:11.173925 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:11.173928 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:11.173931 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:11.173937 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:11.173952 1100921 retry.go:31] will retry after 3.238867192s: missing components: kube-apiserver
	I0528 21:08:14.419720 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:14.419735 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:14.419740 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:14.419743 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:14.419746 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:14.419749 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:14.419752 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:14.419758 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0528 21:08:14.419771 1100921 retry.go:31] will retry after 4.467955994s: missing components: kube-apiserver
	I0528 21:08:18.895714 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:18.895730 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:18.895735 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:18.895739 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Pending
	I0528 21:08:18.895743 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:18.895746 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:18.895749 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:18.895752 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running
	I0528 21:08:18.895765 1100921 retry.go:31] will retry after 5.924811748s: missing components: kube-apiserver
	I0528 21:08:24.829017 1100921 system_pods.go:86] 7 kube-system pods found
	I0528 21:08:24.829034 1100921 system_pods.go:89] "coredns-7db6d8ff4d-kbbld" [d17f8ccf-80ec-40ee-8d1e-dde4d845ac72] Running
	I0528 21:08:24.829039 1100921 system_pods.go:89] "etcd-functional-409073" [cd3d383e-f677-4847-a519-08df0d96da85] Running
	I0528 21:08:24.829047 1100921 system_pods.go:89] "kube-apiserver-functional-409073" [cf825ac1-8ecf-4916-b117-551f96a5f312] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0528 21:08:24.829051 1100921 system_pods.go:89] "kube-controller-manager-functional-409073" [6ad96fc4-de18-4256-a03e-beb511453220] Running
	I0528 21:08:24.829056 1100921 system_pods.go:89] "kube-proxy-xccsc" [1387ca0a-f76d-4683-8c7c-4a2b10f65923] Running
	I0528 21:08:24.829059 1100921 system_pods.go:89] "kube-scheduler-functional-409073" [a6679afc-122c-4584-a280-3d4ebc5a1834] Running
	I0528 21:08:24.829062 1100921 system_pods.go:89] "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running
	I0528 21:08:24.829068 1100921 system_pods.go:126] duration metric: took 29.263249456s to wait for k8s-apps to be running ...
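
	The system_pods.go loop above relists kube-system pods and retries while any required component lacks a Running pod; the Pending kube-apiserver pod kept it looping for about 29s. A sketch assuming components are matched by the component/k8s-app label values named at the start of the wait:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// missingComponents lists kube-system pods and returns the required components
// with no Running pod, the condition behind the "will retry after ...: missing
// components: kube-apiserver" lines above. The helper shape is an assumption.
func missingComponents(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	required := []string{"kube-dns", "etcd", "kube-apiserver",
		"kube-controller-manager", "kube-proxy", "kube-scheduler"}
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			continue // a Pending kube-apiserver leaves its component missing
		}
		for _, key := range []string{"component", "k8s-app"} {
			if v, ok := p.Labels[key]; ok {
				running[v] = true
			}
		}
	}
	var missing []string
	for _, name := range required {
		if !running[name] {
			missing = append(missing, name)
		}
	}
	return missing, nil
}
```
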
	I0528 21:08:24.829075 1100921 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 21:08:24.829136 1100921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:08:24.840801 1100921 system_svc.go:56] duration metric: took 11.705418ms WaitForService to wait for kubelet
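
	The system_svc.go step above only asks systemd whether kubelet is active; the command's exit status is the whole answer. A sketch via os/exec (the logged command additionally prefixes sudo and the word "service"; plain systemctl accepts the unit name directly):

```go
package sketch

import "os/exec"

// kubeletRunning mirrors the logged check: `systemctl is-active --quiet kubelet`
// exits 0 only when the unit is active, so a nil error means "running".
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
```
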
	I0528 21:08:24.840820 1100921 kubeadm.go:576] duration metric: took 1m16.231019004s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 21:08:24.840840 1100921 node_conditions.go:102] verifying NodePressure condition ...
	I0528 21:08:24.844025 1100921 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0528 21:08:24.844041 1100921 node_conditions.go:123] node cpu capacity is 2
	I0528 21:08:24.844050 1100921 node_conditions.go:105] duration metric: took 3.206397ms to run NodePressure ...
	I0528 21:08:24.844062 1100921 start.go:240] waiting for startup goroutines ...
	I0528 21:08:24.844069 1100921 start.go:245] waiting for cluster config update ...
	I0528 21:08:24.844078 1100921 start.go:254] writing updated cluster config ...
	I0528 21:08:24.844391 1100921 ssh_runner.go:195] Run: rm -f paused
	I0528 21:08:24.899763 1100921 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 21:08:24.908018 1100921 out.go:177] * Done! kubectl is now configured to use "functional-409073" cluster and "default" namespace by default
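
	Most of the 1m16s summarized above was the node_ready.go poll: fetch the node every couple of seconds, tolerate connection-refused while the apiserver container restarts, and stop once the Ready condition turns True. A client-go sketch of that loop; the 2s interval and 6m timeout are assumptions inferred from the logged waits, not minikube's code:

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True, swallowing
// transient errors such as the "connection refused" responses seen above while
// the apiserver container was down.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through apiserver downtime
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```

	Called with "functional-409073", this returns once the kubelet posts Ready, matching the 44s wait recorded in the log.
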
	
	
	==> Docker <==
	May 28 21:06:42 functional-409073 dockerd[6508]: time="2024-05-28T21:06:42.522509813Z" level=info msg="ignoring event" container=26263380714d1b51db363c3b4c9b73843e9aae4d399c20f95a61d9f984757cea module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:06:42 functional-409073 dockerd[6508]: time="2024-05-28T21:06:42.545495092Z" level=info msg="ignoring event" container=a013361ac0d287379cc17dd3fac2f4178f2634397b2b556eba25cf35e8334899 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:06:42 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:42Z" level=info msg="Cannot create symbolic link because container log file doesn't exist!"
	May 28 21:06:42 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b58c7255bbf9000629ac5f6a3d2ba4a8130a8ca4658b672f3890412c672c9185/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	May 28 21:06:42 functional-409073 cri-dockerd[6741]: W0528 21:06:42.775233    6741 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	May 28 21:06:42 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/48aa1421c1181d7974562d4f83e8ecc509a66d19be808bc5ff3b881112c18933/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	May 28 21:06:42 functional-409073 cri-dockerd[6741]: W0528 21:06:42.790947    6741 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	May 28 21:06:47 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbbld_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a013361ac0d287379cc17dd3fac2f4178f2634397b2b556eba25cf35e8334899\""
	May 28 21:06:48 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:48Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"59fec8d8a779cb4f2a3180b4ebdfc889c1b8823de9f926003642f102dfe18598\". Proceed without further sandbox information."
	May 28 21:06:48 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:48Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"0cad53f6fe58372dd966e068088e0be46ffc8fc46c17e67605799affcdf51cb8\". Proceed without further sandbox information."
	May 28 21:06:48 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:48Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"8ac25a7550c1c95b0d32ecc669f1d6d04b6885945dda0356a874bb18e2fd7085\". Proceed without further sandbox information."
	May 28 21:06:48 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b44539549ac5fb312a1728d3bd33e80616ae28d382ef0791a8e4839303dbf730/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	May 28 21:06:48 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9b6c8b994922bb519b438a781df8c928fc8b16b6c240071c743a1fd6152b0352/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	May 28 21:06:48 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-kbbld_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a013361ac0d287379cc17dd3fac2f4178f2634397b2b556eba25cf35e8334899\""
	May 28 21:06:52 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:52Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 28 21:06:53 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b9ae3589ac39df5375c15b100e0cdbc6c98465442e805a25032d1a4180efd09/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	May 28 21:06:53 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf0d8cad4f9d8ff6a63fb3bbc68805b9a0484830fc65aebcdce8685f9f378c5f/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	May 28 21:06:53 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:06:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/136d307cc6348a3282f1a4aaa30055ea936d189e6decbcb864f970ff808e02f1/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	May 28 21:06:53 functional-409073 dockerd[6508]: time="2024-05-28T21:06:53.474362432Z" level=info msg="ignoring event" container=5f71d900fd23b73f62b0c66f714081185ce997af5c4d4f04bc8c542a8e157fd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:07:08 functional-409073 dockerd[6508]: time="2024-05-28T21:07:08.845918754Z" level=info msg="ignoring event" container=240555ae05687a31076c57f330282188f9cfad57c9314945dda2c7c2d1ea4af8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:07:34 functional-409073 dockerd[6508]: time="2024-05-28T21:07:34.751604006Z" level=info msg="ignoring event" container=50b35e311bffc8ae4c0eb338a3139b682f471dc8a584ecf17b30b1dd4e800a45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:07:37 functional-409073 dockerd[6508]: time="2024-05-28T21:07:37.575062683Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=d2b73e57216b03d567295ac692a4c6a933a7b270230b76c51658240878e5da74 spanID=63cab5dd06467ab9 traceID=5bb463497ee830b274689120a7649192
	May 28 21:07:37 functional-409073 dockerd[6508]: time="2024-05-28T21:07:37.635422116Z" level=info msg="ignoring event" container=d2b73e57216b03d567295ac692a4c6a933a7b270230b76c51658240878e5da74 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:07:37 functional-409073 dockerd[6508]: time="2024-05-28T21:07:37.683621791Z" level=info msg="ignoring event" container=b44539549ac5fb312a1728d3bd33e80616ae28d382ef0791a8e4839303dbf730 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 28 21:07:49 functional-409073 cri-dockerd[6741]: time="2024-05-28T21:07:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6d6ee33e4455a9d674718c9f9ee14815a8959d72be67062283a67c397002419e/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a07c2a9627f4b       ba04bb24b9575       8 seconds ago        Running             storage-provisioner       6                   bf0d8cad4f9d8       storage-provisioner
	60158aefc077c       988b55d423baf       36 seconds ago       Running             kube-apiserver            0                   6d6ee33e4455a       kube-apiserver-functional-409073
	50b35e311bffc       ba04bb24b9575       51 seconds ago       Exited              storage-provisioner       5                   bf0d8cad4f9d8       storage-provisioner
	22f18b42ac572       2437cf7621777       About a minute ago   Running             coredns                   2                   136d307cc6348       coredns-7db6d8ff4d-kbbld
	e60e636767256       05eccb821e159       About a minute ago   Running             kube-proxy                3                   6b9ae3589ac39       kube-proxy-xccsc
	b29876106d883       014faa467e297       About a minute ago   Running             etcd                      3                   b58c7255bbf90       etcd-functional-409073
	b8026f72530b3       234ac56e455be       About a minute ago   Running             kube-controller-manager   3                   48aa1421c1181       kube-controller-manager-functional-409073
	950b8ae098b4a       163ff818d154d       About a minute ago   Running             kube-scheduler            3                   9b6c8b994922b       kube-scheduler-functional-409073
	1cf83eb6ff2cc       163ff818d154d       About a minute ago   Created             kube-scheduler            2                   8d93f949d4026       kube-scheduler-functional-409073
	52dcdd83edcee       05eccb821e159       About a minute ago   Created             kube-proxy                2                   ef7c0f33ec3ff       kube-proxy-xccsc
	5b873d4310abf       014faa467e297       About a minute ago   Created             etcd                      2                   5f0fa275fd911       etcd-functional-409073
	26263380714d1       234ac56e455be       About a minute ago   Exited              kube-controller-manager   2                   e39786987b5e9       kube-controller-manager-functional-409073
	70119a42f9977       2437cf7621777       2 minutes ago        Exited              coredns                   1                   d4fb94f026fdb       coredns-7db6d8ff4d-kbbld
	
	
	==> coredns [22f18b42ac57] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:39498 - 43277 "HINFO IN 760479440793387561.7962417111947291173. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012857308s
	
	
	==> coredns [70119a42f997] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:47829 - 49269 "HINFO IN 3737929640275913981.4205202249432992484. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023225865s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-409073
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-409073
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=functional-409073
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_04_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:04:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-409073
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 21:08:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 21:08:03 +0000   Tue, 28 May 2024 21:08:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 21:08:03 +0000   Tue, 28 May 2024 21:08:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 21:08:03 +0000   Tue, 28 May 2024 21:08:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 21:08:03 +0000   Tue, 28 May 2024 21:08:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-409073
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	System Info:
	  Machine ID:                 b0bdc9e35c74408a843853af05767875
	  System UUID:                987dd773-8d89-4bc9-b577-7d8afe08b08f
	  Boot ID:                    869fd7c8-60a7-4ae5-b10f-ba225f4e7da7
	  Kernel Version:             5.15.0-1062-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-kbbld                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m23s
	  kube-system                 etcd-functional-409073                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m38s
	  kube-system                 kube-apiserver-functional-409073             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-functional-409073    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-xccsc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kube-scheduler-functional-409073             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 92s                kube-proxy       
	  Normal   Starting                 2m19s              kube-proxy       
	  Normal   Starting                 3m21s              kube-proxy       
	  Normal   Starting                 3m38s              kubelet          Starting kubelet.
	  Normal   NodeReady                3m37s              kubelet          Node functional-409073 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  3m37s              kubelet          Node functional-409073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m37s              kubelet          Node functional-409073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m37s              kubelet          Node functional-409073 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             3m37s              kubelet          Node functional-409073 status is now: NodeNotReady
	  Normal   NodeAllocatableEnforced  3m37s              kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m24s              node-controller  Node functional-409073 event: Registered Node functional-409073 in Controller
	  Warning  ContainerGCFailed        2m37s              kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   RegisteredNode           2m7s               node-controller  Node functional-409073 event: Registered Node functional-409073 in Controller
	  Normal   Starting                 99s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node functional-409073 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node functional-409073 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node functional-409073 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           82s                node-controller  Node functional-409073 event: Registered Node functional-409073 in Controller
	  Normal   NodeNotReady             32s                node-controller  Node functional-409073 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000694] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000939] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000920e9926
	[  +0.001060] FS-Cache: N-key=[8] '1871ed0000000000'
	[  +0.003693] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001003] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=0000000086b8c112
	[  +0.001090] FS-Cache: O-key=[8] '1871ed0000000000'
	[  +0.000789] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001035] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=000000007762c4e2
	[  +0.001070] FS-Cache: N-key=[8] '1871ed0000000000'
	[  +2.335946] FS-Cache: Duplicate cookie detected
	[  +0.000740] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.000979] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=000000005818c61c
	[  +0.001079] FS-Cache: O-key=[8] '1771ed0000000000'
	[  +0.000813] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000990] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000122b7051
	[  +0.001090] FS-Cache: N-key=[8] '1771ed0000000000'
	[  +0.351994] FS-Cache: Duplicate cookie detected
	[  +0.000700] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.001058] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=0000000006f007f4
	[  +0.001125] FS-Cache: O-key=[8] '1d71ed0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000966] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000920e9926
	[  +0.001088] FS-Cache: N-key=[8] '1d71ed0000000000'
	[May28 20:29] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [5b873d4310ab] <==
	
	
	==> etcd [b29876106d88] <==
	{"level":"info","ts":"2024-05-28T21:06:49.502147Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:06:49.502323Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-28T21:06:49.50464Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-05-28T21:06:49.526106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-28T21:06:49.510212Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:06:49.510858Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T21:06:49.510826Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-28T21:06:49.526414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-28T21:06:49.52683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-28T21:06:49.526924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-05-28T21:06:49.527007Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-28T21:06:49.53408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-05-28T21:06:49.527075Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-28T21:06:49.526688Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-28T21:06:49.527124Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-28T21:06:49.526538Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:06:49.53435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-28T21:06:49.534792Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-28T21:06:49.540103Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-409073 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-28T21:06:49.5404Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-28T21:06:49.540434Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:06:49.540421Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-28T21:06:49.554079Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-28T21:06:49.555746Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-28T21:06:49.560907Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 21:08:26 up  4:50,  0 users,  load average: 0.90, 1.52, 2.10
	Linux functional-409073 5.15.0-1062-aws #68~20.04.1-Ubuntu SMP Tue May 7 11:50:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [60158aefc077] <==
	I0528 21:07:52.814196       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0528 21:07:52.814284       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0528 21:07:52.814373       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0528 21:07:52.814775       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0528 21:07:52.814874       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0528 21:07:53.011988       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0528 21:07:53.012020       1 policy_source.go:224] refreshing policies
	I0528 21:07:53.012072       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0528 21:07:53.012084       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0528 21:07:53.017919       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0528 21:07:53.019725       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0528 21:07:53.020032       1 aggregator.go:165] initial CRD sync complete...
	I0528 21:07:53.020122       1 autoregister_controller.go:141] Starting autoregister controller
	I0528 21:07:53.020222       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0528 21:07:53.020301       1 cache.go:39] Caches are synced for autoregister controller
	I0528 21:07:53.025542       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0528 21:07:53.025619       1 shared_informer.go:320] Caches are synced for configmaps
	I0528 21:07:53.029580       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0528 21:07:53.030336       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0528 21:07:53.035876       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0528 21:07:53.105057       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0528 21:07:53.813614       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0528 21:07:54.137434       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0528 21:07:54.139142       1 controller.go:615] quota admission added evaluator for: endpoints
	I0528 21:07:54.144750       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [26263380714d] <==
	
	
	==> kube-controller-manager [b8026f72530b] <==
	E0528 21:07:34.894276       1 resource_quota_controller.go:440] failed to discover resources: Get "https://192.168.49.2:8441/api": dial tcp 192.168.49.2:8441: connect: connection refused
	I0528 21:07:35.331080       1 garbagecollector.go:828] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.49.2:8441/api\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0528 21:07:44.876923       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.49.2:8441/api/v1/nodes/functional-409073/status\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-409073"
	E0528 21:07:44.877297       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-409073"
	E0528 21:07:44.877325       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.49.2:8441/api/v1/nodes/functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	I0528 21:07:49.878354       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	E0528 21:07:49.878860       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.49.2:8441/api/v1/nodes/functional-409073/status\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="node-lifecycle-controller" node="functional-409073"
	E0528 21:07:49.879032       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-409073"
	E0528 21:07:49.879049       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.49.2:8441/api/v1/nodes/functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	E0528 21:07:52.930295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v2.HorizontalPodAutoscaler: unknown (get horizontalpodautoscalers.autoscaling)
	E0528 21:07:52.930351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)
	I0528 21:07:54.880201       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0528 21:07:54.906396       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-409073" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-409073\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-409073, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e607f425-416b-4a21-b0af-1eaa2ee6538a, UID in object meta: cf825ac1-8ecf-4916-b117-551f96a5f312"
	E0528 21:07:54.926435       1 node_lifecycle_controller.go:753] unable to mark all pods NotReady on node functional-409073: Operation cannot be fulfilled on pods "kube-apiserver-functional-409073": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-409073, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e607f425-416b-4a21-b0af-1eaa2ee6538a, UID in object meta: cf825ac1-8ecf-4916-b117-551f96a5f312; queuing for retry
	I0528 21:07:54.926771       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	E0528 21:07:59.932845       1 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-409073\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-409073"
	I0528 21:07:59.960875       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-7db6d8ff4d-kbbld" err="Operation cannot be fulfilled on pods \"coredns-7db6d8ff4d-kbbld\": the object has been modified; please apply your changes to the latest version and try again"
	I0528 21:07:59.963358       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-409073" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-409073\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-409073, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e607f425-416b-4a21-b0af-1eaa2ee6538a, UID in object meta: cf825ac1-8ecf-4916-b117-551f96a5f312"
	I0528 21:07:59.969409       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-controller-manager-functional-409073" err="Operation cannot be fulfilled on pods \"kube-controller-manager-functional-409073\": the object has been modified; please apply your changes to the latest version and try again"
	I0528 21:07:59.975588       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-proxy-xccsc" err="Operation cannot be fulfilled on pods \"kube-proxy-xccsc\": the object has been modified; please apply your changes to the latest version and try again"
	E0528 21:07:59.975681       1 node_lifecycle_controller.go:753] unable to mark all pods NotReady on node functional-409073: [Operation cannot be fulfilled on pods "coredns-7db6d8ff4d-kbbld": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods "kube-apiserver-functional-409073": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-409073, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e607f425-416b-4a21-b0af-1eaa2ee6538a, UID in object meta: cf825ac1-8ecf-4916-b117-551f96a5f312, Operation cannot be fulfilled on pods "kube-controller-manager-functional-409073": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods "kube-proxy-xccsc": the object has been modified; please apply your changes to the latest version and try again]; queuing for retry
	E0528 21:08:04.981008       1 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-409073\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-409073"
	I0528 21:08:05.004235       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0528 21:08:11.408936       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.47711ms"
	I0528 21:08:13.456952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.17µs"
	
	
	==> kube-proxy [52dcdd83edce] <==
	
	
	==> kube-proxy [e60e63676725] <==
	I0528 21:06:53.388008       1 server_linux.go:69] "Using iptables proxy"
	I0528 21:06:53.422459       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0528 21:06:53.471081       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0528 21:06:53.471134       1 server_linux.go:165] "Using iptables Proxier"
	I0528 21:06:53.476677       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0528 21:06:53.476700       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0528 21:06:53.476731       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0528 21:06:53.476933       1 server.go:872] "Version info" version="v1.30.1"
	I0528 21:06:53.476948       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:06:53.483297       1 config.go:192] "Starting service config controller"
	I0528 21:06:53.483318       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0528 21:06:53.483366       1 config.go:101] "Starting endpoint slice config controller"
	I0528 21:06:53.483371       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0528 21:06:53.483948       1 config.go:319] "Starting node config controller"
	I0528 21:06:53.483956       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0528 21:06:53.584875       1 shared_informer.go:320] Caches are synced for node config
	I0528 21:06:53.584909       1 shared_informer.go:320] Caches are synced for service config
	I0528 21:06:53.584944       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1cf83eb6ff2c] <==
	
	
	==> kube-scheduler [950b8ae098b4] <==
	I0528 21:06:50.915657       1 serving.go:380] Generated self-signed cert in-memory
	W0528 21:06:52.214861       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 21:06:52.214889       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 21:06:52.214909       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 21:06:52.214916       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 21:06:52.339899       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0528 21:06:52.340111       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0528 21:06:52.341933       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0528 21:06:52.342254       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0528 21:06:52.342358       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 21:06:52.342456       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0528 21:06:52.442911       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0528 21:07:52.922456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0528 21:07:52.922743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	
	
	==> kubelet <==
	May 28 21:07:43 functional-409073 kubelet[7889]: E0528 21:07:43.268508    7889 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-409073\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-409073?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:43 functional-409073 kubelet[7889]: E0528 21:07:43.268667    7889 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-409073\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-409073?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:43 functional-409073 kubelet[7889]: E0528 21:07:43.268690    7889 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	May 28 21:07:46 functional-409073 kubelet[7889]: E0528 21:07:46.725490    7889 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-409073?timeout=10s\": dial tcp 192.168.49.2:8441: connect: connection refused" interval="7s"
	May 28 21:07:47 functional-409073 kubelet[7889]: I0528 21:07:47.624512    7889 status_manager.go:853] "Failed to get status for pod" podUID="eb53a934a00846d05eecadd867829f60" pod="kube-system/etcd-functional-409073" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:47 functional-409073 kubelet[7889]: I0528 21:07:47.624714    7889 status_manager.go:853] "Failed to get status for pod" podUID="29db9576-4483-4105-900d-513a47525eff" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:47 functional-409073 kubelet[7889]: I0528 21:07:47.624876    7889 status_manager.go:853] "Failed to get status for pod" podUID="3b6057cbc4a8389894a29d3f4496a06e" pod="kube-system/kube-scheduler-functional-409073" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.624610    7889 scope.go:117] "RemoveContainer" containerID="50b35e311bffc8ae4c0eb338a3139b682f471dc8a584ecf17b30b1dd4e800a45"
	May 28 21:07:49 functional-409073 kubelet[7889]: E0528 21:07:49.624853    7889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(29db9576-4483-4105-900d-513a47525eff)\"" pod="kube-system/storage-provisioner" podUID="29db9576-4483-4105-900d-513a47525eff"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.626345    7889 status_manager.go:853] "Failed to get status for pod" podUID="3b6057cbc4a8389894a29d3f4496a06e" pod="kube-system/kube-scheduler-functional-409073" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.628069    7889 status_manager.go:853] "Failed to get status for pod" podUID="eb53a934a00846d05eecadd867829f60" pod="kube-system/etcd-functional-409073" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.628658    7889 status_manager.go:853] "Failed to get status for pod" podUID="29db9576-4483-4105-900d-513a47525eff" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.629343    7889 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-409073" podUID="e607f425-416b-4a21-b0af-1eaa2ee6538a"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.630309    7889 status_manager.go:853] "Failed to get status for pod" podUID="3b6057cbc4a8389894a29d3f4496a06e" pod="kube-system/kube-scheduler-functional-409073" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: E0528 21:07:49.630529    7889 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-409073"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.630942    7889 status_manager.go:853] "Failed to get status for pod" podUID="eb53a934a00846d05eecadd867829f60" pod="kube-system/etcd-functional-409073" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/etcd-functional-409073\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.631283    7889 status_manager.go:853] "Failed to get status for pod" podUID="29db9576-4483-4105-900d-513a47525eff" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.49.2:8441: connect: connection refused"
	May 28 21:07:49 functional-409073 kubelet[7889]: I0528 21:07:49.942192    7889 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-409073" podUID="e607f425-416b-4a21-b0af-1eaa2ee6538a"
	May 28 21:07:52 functional-409073 kubelet[7889]: E0528 21:07:52.907582    7889 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	May 28 21:07:53 functional-409073 kubelet[7889]: I0528 21:07:53.067206    7889 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-409073"
	May 28 21:07:53 functional-409073 kubelet[7889]: I0528 21:07:53.975360    7889 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-409073" podUID="e607f425-416b-4a21-b0af-1eaa2ee6538a"
	May 28 21:08:03 functional-409073 kubelet[7889]: I0528 21:08:03.624685    7889 scope.go:117] "RemoveContainer" containerID="50b35e311bffc8ae4c0eb338a3139b682f471dc8a584ecf17b30b1dd4e800a45"
	May 28 21:08:03 functional-409073 kubelet[7889]: E0528 21:08:03.625448    7889 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(29db9576-4483-4105-900d-513a47525eff)\"" pod="kube-system/storage-provisioner" podUID="29db9576-4483-4105-900d-513a47525eff"
	May 28 21:08:17 functional-409073 kubelet[7889]: I0528 21:08:17.626327    7889 scope.go:117] "RemoveContainer" containerID="50b35e311bffc8ae4c0eb338a3139b682f471dc8a584ecf17b30b1dd4e800a45"
	May 28 21:08:20 functional-409073 kubelet[7889]: I0528 21:08:20.700882    7889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-409073" podStartSLOduration=27.700843202 podStartE2EDuration="27.700843202s" podCreationTimestamp="2024-05-28 21:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-28 21:08:20.700055709 +0000 UTC m=+93.235367832" watchObservedRunningTime="2024-05-28 21:08:20.700843202 +0000 UTC m=+93.236155325"
	
	
	==> storage-provisioner [50b35e311bff] <==
	I0528 21:07:34.733204       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 21:07:34.734640       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a07c2a9627f4] <==
	I0528 21:08:17.751029       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 21:08:17.762985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 21:08:17.763249       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-409073 -n functional-409073
helpers_test.go:261: (dbg) Run:  kubectl --context functional-409073 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (2.08s)
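
For manual triage of a ComponentHealth failure like the one above, roughly the same checks the harness runs can be repeated by hand. A minimal sketch, assuming the functional-409073 profile and its kubeconfig context still exist on the CI host:

	# Control-plane status as reported by the same minikube binary the test invokes.
	out/minikube-linux-arm64 status -p functional-409073

	# List any pods that are not Running, mirroring the harness query above.
	kubectl --context functional-409073 get po -A --field-selector=status.phase!=Running

	# Re-read the node conditions and events captured in the describe output above.
	kubectl --context functional-409073 describe node functional-409073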

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (374.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-292036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0528 22:01:51.987393 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:53.267769 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:55.828602 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:56.004131 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:02:00.949704 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:02:06.245155 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:02:09.308333 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 22:02:11.190350 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:02:26.725310 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:02:31.671290 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:02:41.886092 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:03:02.674740 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:03:03.960017 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 22:03:07.685989 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:03:12.632003 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:03:19.119553 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 22:03:25.289603 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.294971 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.305233 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.325361 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.365692 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.446871 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.607616 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:25.928278 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:26.569232 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:27.850325 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:30.411100 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:31.644270 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 22:03:35.531619 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:03:36.072541 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 22:03:38.625543 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 22:03:39.017087 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
E0528 22:03:45.772438 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:04:03.212354 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.217618 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.227902 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.248209 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.288422 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.368697 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.529130 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:03.849482 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:04.489936 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:05.770293 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:06.253093 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:04:06.701601 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
E0528 22:04:08.331434 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:13.451966 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:23.693082 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:29.606651 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:04:34.552822 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:04:44.173383 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:04:47.213324 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:04:58.043966 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:05:18.831256 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:05:25.134183 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:05:25.726532 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:05:43.858885 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 22:05:46.515918 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:06:09.134159 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-292036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m11.967892041s)

                                                
                                                
-- stdout --
	* [old-k8s-version-292036] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-292036" primary control-plane node in "old-k8s-version-292036" cluster
	* Pulling base image v0.0.44-1716228441-18934 ...
	* Restarting existing docker container for "old-k8s-version-292036" ...
	* Preparing Kubernetes v1.20.0 on Docker 26.1.3 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-292036 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0528 22:01:51.838643 1429762 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:01:51.838817 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:01:51.838846 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:01:51.838866 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:01:51.839114 1429762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 22:01:51.839510 1429762 out.go:298] Setting JSON to false
	I0528 22:01:51.840685 1429762 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20661,"bootTime":1716913051,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 22:01:51.840780 1429762 start.go:139] virtualization:  
	I0528 22:01:51.845429 1429762 out.go:177] * [old-k8s-version-292036] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 22:01:51.847970 1429762 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:01:51.848048 1429762 notify.go:220] Checking for updates...
	I0528 22:01:51.853020 1429762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:01:51.855502 1429762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 22:01:51.858232 1429762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 22:01:51.860869 1429762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 22:01:51.863343 1429762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:01:51.866480 1429762 config.go:182] Loaded profile config "old-k8s-version-292036": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0528 22:01:51.869585 1429762 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0528 22:01:51.872072 1429762 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:01:51.893919 1429762 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 22:01:51.894093 1429762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:01:51.968387 1429762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:01:51.957068939 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:01:51.968504 1429762 docker.go:295] overlay module found
	I0528 22:01:51.973596 1429762 out.go:177] * Using the docker driver based on existing profile
	I0528 22:01:51.976336 1429762 start.go:297] selected driver: docker
	I0528 22:01:51.976355 1429762 start.go:901] validating driver "docker" against &{Name:old-k8s-version-292036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-292036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:01:51.976488 1429762 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:01:51.977178 1429762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:01:52.038294 1429762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:01:52.028951348 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:01:52.038672 1429762 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:01:52.038729 1429762 cni.go:84] Creating CNI manager for ""
	I0528 22:01:52.038749 1429762 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0528 22:01:52.038792 1429762 start.go:340] cluster config:
	{Name:old-k8s-version-292036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-292036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:01:52.043334 1429762 out.go:177] * Starting "old-k8s-version-292036" primary control-plane node in "old-k8s-version-292036" cluster
	I0528 22:01:52.045878 1429762 cache.go:121] Beginning downloading kic base image for docker with docker
	I0528 22:01:52.048613 1429762 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 22:01:52.051227 1429762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0528 22:01:52.051307 1429762 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0528 22:01:52.051325 1429762 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 22:01:52.051341 1429762 cache.go:56] Caching tarball of preloaded images
	I0528 22:01:52.051423 1429762 preload.go:173] Found /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0528 22:01:52.051433 1429762 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
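
Context for the preload.go/cache.go lines above: minikube skips the download when a version-specific preload tarball is already in the local cache. A minimal Go sketch of such an existence check follows; the filename scheme and cache layout are copied from the paths in this log, and the helper name is hypothetical.

// Hedged sketch (not minikube's actual code): check whether a preloaded image
// tarball for a given Kubernetes version already exists in the local cache
// before attempting a download, mirroring the preload.go messages above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the expected cache location; the naming scheme is an
// assumption modeled on the path seen in the log.
func preloadPath(cacheDir, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4", k8sVersion, runtime, arch)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube/cache"), "v1.20.0", "docker", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("Found local preload:", p, "- skipping download")
	} else {
		fmt.Println("No local preload, would download:", p)
	}
}
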
	I0528 22:01:52.051552 1429762 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/config.json ...
	I0528 22:01:52.067884 1429762 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon, skipping pull
	I0528 22:01:52.067910 1429762 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in daemon, skipping load
	I0528 22:01:52.067934 1429762 cache.go:194] Successfully downloaded all kic artifacts
	I0528 22:01:52.068021 1429762 start.go:360] acquireMachinesLock for old-k8s-version-292036: {Name:mke34ef6f266c7bcc3bef0b06ec81b012e00982a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:01:52.068171 1429762 start.go:364] duration metric: took 125.092µs to acquireMachinesLock for "old-k8s-version-292036"
	I0528 22:01:52.068198 1429762 start.go:96] Skipping create...Using existing machine configuration
	I0528 22:01:52.068212 1429762 fix.go:54] fixHost starting: 
	I0528 22:01:52.068581 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:01:52.085190 1429762 fix.go:112] recreateIfNeeded on old-k8s-version-292036: state=Stopped err=<nil>
	W0528 22:01:52.085229 1429762 fix.go:138] unexpected machine state, will restart: <nil>
	I0528 22:01:52.088223 1429762 out.go:177] * Restarting existing docker container for "old-k8s-version-292036" ...
	I0528 22:01:52.090714 1429762 cli_runner.go:164] Run: docker start old-k8s-version-292036
	I0528 22:01:52.417899 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:01:52.447381 1429762 kic.go:430] container "old-k8s-version-292036" state is running.
	I0528 22:01:52.448010 1429762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-292036
	I0528 22:01:52.469467 1429762 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/config.json ...
	I0528 22:01:52.469733 1429762 machine.go:94] provisionDockerMachine start ...
	I0528 22:01:52.469819 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:52.490292 1429762 main.go:141] libmachine: Using SSH client type: native
	I0528 22:01:52.490608 1429762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I0528 22:01:52.490622 1429762 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 22:01:52.491397 1429762 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0528 22:01:55.613717 1429762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-292036
	
	I0528 22:01:55.613745 1429762 ubuntu.go:169] provisioning hostname "old-k8s-version-292036"
	I0528 22:01:55.613841 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:55.633268 1429762 main.go:141] libmachine: Using SSH client type: native
	I0528 22:01:55.633515 1429762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I0528 22:01:55.633532 1429762 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-292036 && echo "old-k8s-version-292036" | sudo tee /etc/hostname
	I0528 22:01:55.771007 1429762 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-292036
	
	I0528 22:01:55.771125 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:55.788095 1429762 main.go:141] libmachine: Using SSH client type: native
	I0528 22:01:55.788348 1429762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I0528 22:01:55.788371 1429762 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-292036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-292036/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-292036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:01:55.910105 1429762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
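
The shell run over SSH above is the standard Debian/Ubuntu hostname fix: if /etc/hosts does not yet mention the new hostname, it either rewrites an existing 127.0.1.1 line or appends one. A self-contained Go sketch of the same edit, operating on an in-memory sample rather than the real file, and using a plain substring check where the shell uses an anchored grep:

// Hedged sketch of the /etc/hosts patch performed by the shell above.
package main

import (
	"fmt"
	"strings"
)

func patchHosts(contents, hostname string) string {
	if strings.Contains(contents, hostname) { // outer guard: name already present
		return contents
	}
	lines := strings.Split(contents, "\n")
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the existing alias line
			return strings.Join(lines, "\n")
		}
	}
	return contents + fmt.Sprintf("127.0.1.1 %s\n", hostname) // else-branch: append
}

func main() {
	sample := "127.0.0.1 localhost\n127.0.1.1 oldname\n"
	fmt.Print(patchHosts(sample, "old-k8s-version-292036"))
}
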
	I0528 22:01:55.910140 1429762 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1064873/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1064873/.minikube}
	I0528 22:01:55.910157 1429762 ubuntu.go:177] setting up certificates
	I0528 22:01:55.910166 1429762 provision.go:84] configureAuth start
	I0528 22:01:55.910226 1429762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-292036
	I0528 22:01:55.926830 1429762 provision.go:143] copyHostCerts
	I0528 22:01:55.926906 1429762 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem, removing ...
	I0528 22:01:55.926920 1429762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem
	I0528 22:01:55.926998 1429762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem (1078 bytes)
	I0528 22:01:55.927108 1429762 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem, removing ...
	I0528 22:01:55.927118 1429762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem
	I0528 22:01:55.927147 1429762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem (1123 bytes)
	I0528 22:01:55.927311 1429762 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem, removing ...
	I0528 22:01:55.927322 1429762 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem
	I0528 22:01:55.927355 1429762 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem (1679 bytes)
	I0528 22:01:55.927427 1429762 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-292036 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-292036]
	I0528 22:01:56.412603 1429762 provision.go:177] copyRemoteCerts
	I0528 22:01:56.412677 1429762 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:01:56.412724 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:56.436587 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:01:56.531332 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 22:01:56.556955 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0528 22:01:56.582610 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0528 22:01:56.609629 1429762 provision.go:87] duration metric: took 699.433002ms to configureAuth
	I0528 22:01:56.609658 1429762 ubuntu.go:193] setting minikube options for container-runtime
	I0528 22:01:56.609852 1429762 config.go:182] Loaded profile config "old-k8s-version-292036": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0528 22:01:56.609918 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:56.626572 1429762 main.go:141] libmachine: Using SSH client type: native
	I0528 22:01:56.626843 1429762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I0528 22:01:56.626858 1429762 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 22:01:56.754924 1429762 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0528 22:01:56.754953 1429762 ubuntu.go:71] root file system type: overlay
	I0528 22:01:56.755073 1429762 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 22:01:56.755146 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:56.771856 1429762 main.go:141] libmachine: Using SSH client type: native
	I0528 22:01:56.772100 1429762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I0528 22:01:56.772177 1429762 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 22:01:56.916535 1429762 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 22:01:56.916620 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:56.933287 1429762 main.go:141] libmachine: Using SSH client type: native
	I0528 22:01:56.933534 1429762 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34292 <nil> <nil>}
	I0528 22:01:56.933551 1429762 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 22:01:57.076910 1429762 main.go:141] libmachine: SSH cmd err, output: <nil>: 
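
The command just executed only swaps docker.service.new into place and restarts Docker when diff reports a difference, so an unchanged unit costs nothing on restart. A hedged Go sketch of that install-if-changed step, shelling out the same way the SSH command does (this is an illustration, not minikube's actual implementation):

// Hedged sketch of the "install unit only if changed" step logged above.
package main

import (
	"fmt"
	"os/exec"
)

func installIfChanged(current, proposed string) error {
	// diff exits 0 when the files are identical; then there is nothing to do.
	if err := exec.Command("sudo", "diff", "-u", current, proposed).Run(); err == nil {
		return nil
	}
	steps := [][]string{
		{"sudo", "mv", proposed, current},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println("install failed:", err)
	}
}
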
	I0528 22:01:57.076937 1429762 machine.go:97] duration metric: took 4.607192954s to provisionDockerMachine
	I0528 22:01:57.076957 1429762 start.go:293] postStartSetup for "old-k8s-version-292036" (driver="docker")
	I0528 22:01:57.076981 1429762 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:01:57.077100 1429762 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:01:57.077158 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:57.096199 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:01:57.187304 1429762 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:01:57.190828 1429762 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 22:01:57.190863 1429762 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 22:01:57.190873 1429762 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 22:01:57.190880 1429762 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 22:01:57.190891 1429762 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/addons for local assets ...
	I0528 22:01:57.190952 1429762 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/files for local assets ...
	I0528 22:01:57.191034 1429762 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem -> 10703092.pem in /etc/ssl/certs
	I0528 22:01:57.191139 1429762 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:01:57.200499 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem --> /etc/ssl/certs/10703092.pem (1708 bytes)
	I0528 22:01:57.226622 1429762 start.go:296] duration metric: took 149.637174ms for postStartSetup
	I0528 22:01:57.226728 1429762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 22:01:57.226809 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:57.243649 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:01:57.331011 1429762 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 22:01:57.335811 1429762 fix.go:56] duration metric: took 5.267598343s for fixHost
	I0528 22:01:57.335837 1429762 start.go:83] releasing machines lock for "old-k8s-version-292036", held for 5.267653062s
	I0528 22:01:57.335916 1429762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-292036
	I0528 22:01:57.352557 1429762 ssh_runner.go:195] Run: cat /version.json
	I0528 22:01:57.352638 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:57.352919 1429762 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:01:57.352984 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:01:57.375543 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:01:57.383627 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:01:57.469955 1429762 ssh_runner.go:195] Run: systemctl --version
	I0528 22:01:57.583093 1429762 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 22:01:57.587619 1429762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0528 22:01:57.608085 1429762 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0528 22:01:57.608175 1429762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0528 22:01:57.626073 1429762 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0528 22:01:57.643619 1429762 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
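
The find/sed pipelines above rewrite the "subnet" fields of any bridge and podman CNI configs to the 10.244.0.0/16 pod CIDR (they also drop IPv6 subnets, which this sketch omits). A minimal Go equivalent of the subnet rewrite, with an illustrative inline config instead of files under /etc/cni/net.d:

// Hedged sketch: the same subnet rewrite the sed expressions above perform,
// done with Go's regexp package on an in-memory CNI config.
package main

import (
	"fmt"
	"regexp"
)

var subnetRe = regexp.MustCompile(`"subnet":\s*"[^"]*"`)

func patchSubnet(conf []byte, podCIDR string) []byte {
	return subnetRe.ReplaceAll(conf, []byte(fmt.Sprintf(`"subnet": %q`, podCIDR)))
}

func main() {
	in := []byte(`{"type":"bridge","ipam":{"subnet":"10.88.0.0/16"}}`)
	fmt.Printf("%s\n", patchSubnet(in, "10.244.0.0/16"))
}
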
	I0528 22:01:57.643663 1429762 start.go:494] detecting cgroup driver to use...
	I0528 22:01:57.643699 1429762 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 22:01:57.643826 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:01:57.660752 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0528 22:01:57.671536 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 22:01:57.686063 1429762 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 22:01:57.686171 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 22:01:57.696511 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 22:01:57.706696 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 22:01:57.717260 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 22:01:57.727836 1429762 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:01:57.737394 1429762 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 22:01:57.747890 1429762 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:01:57.756689 1429762 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:01:57.765991 1429762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:01:57.856585 1429762 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 22:01:57.979910 1429762 start.go:494] detecting cgroup driver to use...
	I0528 22:01:57.979964 1429762 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 22:01:57.980015 1429762 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 22:01:58.007901 1429762 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0528 22:01:58.008025 1429762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 22:01:58.022314 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:01:58.044076 1429762 ssh_runner.go:195] Run: which cri-dockerd
	I0528 22:01:58.050909 1429762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 22:01:58.060176 1429762 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 22:01:58.087167 1429762 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 22:01:58.182236 1429762 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 22:01:58.287804 1429762 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 22:01:58.287984 1429762 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
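
The 130-byte /etc/docker/daemon.json written above is not shown in the log; a plausible minimal content for forcing the cgroupfs driver is sketched below. The exact fields minikube writes are an assumption; "exec-opts" with "native.cgroupdriver" is the documented Docker option for this.

// Hedged sketch of a daemon.json that pins Docker to the cgroupfs driver.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
}
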
	I0528 22:01:58.309597 1429762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:01:58.417564 1429762 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 22:01:58.863863 1429762 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 22:01:58.900267 1429762 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 22:01:58.931908 1429762 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 26.1.3 ...
	I0528 22:01:58.932083 1429762 cli_runner.go:164] Run: docker network inspect old-k8s-version-292036 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 22:01:58.955166 1429762 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0528 22:01:58.959621 1429762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:01:58.972205 1429762 kubeadm.go:877] updating cluster {Name:old-k8s-version-292036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-292036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0528 22:01:58.972325 1429762 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0528 22:01:58.972377 1429762 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 22:01:58.995393 1429762 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0528 22:01:58.995418 1429762 docker.go:615] Images already preloaded, skipping extraction
	I0528 22:01:58.995483 1429762 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 22:01:59.015176 1429762 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0528 22:01:59.015197 1429762 cache_images.go:84] Images are preloaded, skipping loading
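
"Images are preloaded, skipping loading" above is decided by comparing the docker images listing against the image set required for v1.20.0. A small Go sketch of that set-containment check, with an abbreviated image list taken from the -- stdout -- block above:

// Hedged sketch: decide to skip extraction because every required image is
// already present, as in the cache_images message above.
package main

import "fmt"

func allPresent(have, want []string) bool {
	set := make(map[string]bool, len(have))
	for _, img := range have {
		set[img] = true
	}
	for _, img := range want {
		if !set[img] {
			return false
		}
	}
	return true
}

func main() {
	have := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/etcd:3.4.13-0", "k8s.gcr.io/pause:3.2"}
	want := []string{"k8s.gcr.io/kube-apiserver:v1.20.0", "k8s.gcr.io/pause:3.2"}
	fmt.Println("preloaded, skipping loading:", allPresent(have, want))
}
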
	I0528 22:01:59.015207 1429762 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 docker true true} ...
	I0528 22:01:59.015335 1429762 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-292036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-292036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:01:59.015402 1429762 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
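
The docker info --format {{.CgroupDriver}} probe above reads back the runtime's cgroup driver before kubelet is configured to match it. A Go sketch of the same query via os/exec:

// Hedged sketch: query Docker's cgroup driver exactly as the command above
// does, then report it (the log later configures kubelet for cgroupfs).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("docker cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs" or "systemd"
}
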
	I0528 22:01:59.076328 1429762 cni.go:84] Creating CNI manager for ""
	I0528 22:01:59.076401 1429762 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0528 22:01:59.076429 1429762 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 22:01:59.076477 1429762 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-292036 NodeName:old-k8s-version-292036 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0528 22:01:59.076662 1429762 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-292036"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0528 22:01:59.076750 1429762 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0528 22:01:59.086794 1429762 binaries.go:44] Found k8s binaries, skipping transfer
	I0528 22:01:59.086858 1429762 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:01:59.097098 1429762 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0528 22:01:59.122767 1429762 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:01:59.153605 1429762 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
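
The 2118-byte kubeadm.yaml.new just copied is the rendered form of the config printed above. Rendering such a config from a Go text/template is a common way to generate it; the sketch below renders an abbreviated fragment with the same values (the template and its field names are illustrative, not minikube's own):

// Hedged sketch: render a fragment of a kubeadm ClusterConfiguration from a
// template, using the values that appear in the config above.
package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodCIDR}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	if err := t.Execute(os.Stdout, map[string]string{
		"Endpoint": "control-plane.minikube.internal", "Port": "8443",
		"Version": "v1.20.0", "PodCIDR": "10.244.0.0/16", "ServiceCIDR": "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
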
	I0528 22:01:59.177515 1429762 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0528 22:01:59.181179 1429762 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:01:59.193023 1429762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:01:59.316215 1429762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:01:59.341660 1429762 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036 for IP: 192.168.85.2
	I0528 22:01:59.341731 1429762 certs.go:194] generating shared ca certs ...
	I0528 22:01:59.341764 1429762 certs.go:226] acquiring lock for ca certs: {Name:mk5cb73d5e2c9c3b65010257baa77ed890ffd0a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:01:59.341946 1429762 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key
	I0528 22:01:59.342055 1429762 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key
	I0528 22:01:59.342085 1429762 certs.go:256] generating profile certs ...
	I0528 22:01:59.342210 1429762 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.key
	I0528 22:01:59.342317 1429762 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/apiserver.key.88e3276e
	I0528 22:01:59.342436 1429762 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/proxy-client.key
	I0528 22:01:59.342592 1429762 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309.pem (1338 bytes)
	W0528 22:01:59.342649 1429762 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309_empty.pem, impossibly tiny 0 bytes
	I0528 22:01:59.342676 1429762 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 22:01:59.342733 1429762 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem (1078 bytes)
	I0528 22:01:59.342781 1429762 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:01:59.342840 1429762 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem (1679 bytes)
	I0528 22:01:59.342912 1429762 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem (1708 bytes)
	I0528 22:01:59.343583 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:01:59.419871 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:01:59.476147 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:01:59.529901 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:01:59.562869 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0528 22:01:59.589456 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0528 22:01:59.634350 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:01:59.670049 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0528 22:01:59.740597 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309.pem --> /usr/share/ca-certificates/1070309.pem (1338 bytes)
	I0528 22:01:59.775343 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem --> /usr/share/ca-certificates/10703092.pem (1708 bytes)
	I0528 22:01:59.803038 1429762 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:01:59.831985 1429762 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:01:59.851307 1429762 ssh_runner.go:195] Run: openssl version
	I0528 22:01:59.856823 1429762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1070309.pem && ln -fs /usr/share/ca-certificates/1070309.pem /etc/ssl/certs/1070309.pem"
	I0528 22:01:59.866562 1429762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1070309.pem
	I0528 22:01:59.870145 1429762 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 21:04 /usr/share/ca-certificates/1070309.pem
	I0528 22:01:59.870212 1429762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1070309.pem
	I0528 22:01:59.877067 1429762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1070309.pem /etc/ssl/certs/51391683.0"
	I0528 22:01:59.885995 1429762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10703092.pem && ln -fs /usr/share/ca-certificates/10703092.pem /etc/ssl/certs/10703092.pem"
	I0528 22:01:59.896237 1429762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10703092.pem
	I0528 22:01:59.899813 1429762 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 21:04 /usr/share/ca-certificates/10703092.pem
	I0528 22:01:59.899880 1429762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10703092.pem
	I0528 22:01:59.907129 1429762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10703092.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 22:01:59.916306 1429762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:01:59.926157 1429762 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:01:59.930049 1429762 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:01:59.930119 1429762 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:01:59.937587 1429762 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
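
The ln -fs targets above (51391683.0, 3ec20f2e.0, b5213941.0) come from OpenSSL's subject-hash convention: openssl x509 -hash prints the hash that names the /etc/ssl/certs/<hash>.0 symlink pointing at the PEM file. A hedged Go sketch that shells out to openssl and creates the link, approximating the shell's test -L || ln -fs guard by removing any stale link first:

// Hedged sketch of the subject-hash symlink step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // os.Symlink fails if the link already exists
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
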
	I0528 22:01:59.949965 1429762 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:01:59.953815 1429762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0528 22:01:59.960830 1429762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0528 22:01:59.967846 1429762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0528 22:01:59.974733 1429762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0528 22:01:59.981684 1429762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0528 22:01:59.988572 1429762 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
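
Each -checkend 86400 run above asks openssl whether the certificate expires within the next 24 hours. The same check can be done natively with crypto/x509, as in this sketch (the cert path mirrors the log):

// Hedged sketch: a native equivalent of `openssl x509 -checkend 86400`,
// reporting whether a PEM certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
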
	I0528 22:01:59.996016 1429762 kubeadm.go:391] StartCluster: {Name:old-k8s-version-292036 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-292036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:01:59.996184 1429762 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 22:02:00.016743 1429762 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0528 22:02:00.028712 1429762 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0528 22:02:00.028733 1429762 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0528 22:02:00.028740 1429762 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0528 22:02:00.028803 1429762 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0528 22:02:00.067192 1429762 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0528 22:02:00.068121 1429762 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-292036" does not appear in /home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 22:02:00.068695 1429762 kubeconfig.go:62] /home/jenkins/minikube-integration/18966-1064873/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-292036" cluster setting kubeconfig missing "old-k8s-version-292036" context setting]
	I0528 22:02:00.069532 1429762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/kubeconfig: {Name:mk43b4b38c110ff2ffbd3a6de61be9ad6b977a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
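
The kubeconfig repair above adds the missing cluster and context entries for old-k8s-version-292036. A hedged sketch of that edit using client-go's clientcmd package (an external dependency; the AuthInfo name is an assumption, and this is not minikube's actual code):

// Hedged sketch: add the missing cluster and context to a kubeconfig file.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/18966-1064873/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "old-k8s-version-292036"
	cluster := api.NewCluster()
	cluster.Server = "https://192.168.85.2:8443"
	cfg.Clusters[name] = cluster
	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name // assumed to match an existing user entry
	cfg.Contexts[name] = ctx
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
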
	I0528 22:02:00.073644 1429762 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0528 22:02:00.099270 1429762 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0528 22:02:00.099320 1429762 kubeadm.go:591] duration metric: took 70.571452ms to restartPrimaryControlPlane
	I0528 22:02:00.099331 1429762 kubeadm.go:393] duration metric: took 103.3242ms to StartCluster
	I0528 22:02:00.099350 1429762 settings.go:142] acquiring lock: {Name:mk9dd4e0f1e49f25e638e0ae0a582e344ec1255d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:02:00.099418 1429762 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 22:02:00.101025 1429762 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/kubeconfig: {Name:mk43b4b38c110ff2ffbd3a6de61be9ad6b977a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:02:00.101310 1429762 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 22:02:00.113693 1429762 out.go:177] * Verifying Kubernetes components...
	I0528 22:02:00.102199 1429762 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:02:00.102465 1429762 config.go:182] Loaded profile config "old-k8s-version-292036": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0528 22:02:00.117117 1429762 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:02:00.117288 1429762 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-292036"
	I0528 22:02:00.117314 1429762 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-292036"
	W0528 22:02:00.117346 1429762 addons.go:243] addon storage-provisioner should already be in state true
	I0528 22:02:00.117391 1429762 host.go:66] Checking if "old-k8s-version-292036" exists ...
	I0528 22:02:00.118095 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:02:00.118356 1429762 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-292036"
	I0528 22:02:00.118473 1429762 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-292036"
	I0528 22:02:00.119097 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:02:00.119744 1429762 addons.go:69] Setting dashboard=true in profile "old-k8s-version-292036"
	I0528 22:02:00.119788 1429762 addons.go:234] Setting addon dashboard=true in "old-k8s-version-292036"
	W0528 22:02:00.119805 1429762 addons.go:243] addon dashboard should already be in state true
	I0528 22:02:00.119843 1429762 host.go:66] Checking if "old-k8s-version-292036" exists ...
	I0528 22:02:00.120277 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:02:00.133667 1429762 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-292036"
	I0528 22:02:00.133735 1429762 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-292036"
	W0528 22:02:00.133748 1429762 addons.go:243] addon metrics-server should already be in state true
	I0528 22:02:00.133800 1429762 host.go:66] Checking if "old-k8s-version-292036" exists ...
	I0528 22:02:00.134339 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:02:00.267677 1429762 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-292036"
	W0528 22:02:00.267707 1429762 addons.go:243] addon default-storageclass should already be in state true
	I0528 22:02:00.267737 1429762 host.go:66] Checking if "old-k8s-version-292036" exists ...
	I0528 22:02:00.268173 1429762 cli_runner.go:164] Run: docker container inspect old-k8s-version-292036 --format={{.State.Status}}
	I0528 22:02:00.302147 1429762 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0528 22:02:00.305458 1429762 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0528 22:02:00.308056 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0528 22:02:00.308082 1429762 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0528 22:02:00.308189 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:02:00.346600 1429762 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:02:00.349577 1429762 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:02:00.349611 1429762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:02:00.349722 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:02:00.363083 1429762 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0528 22:02:00.366358 1429762 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0528 22:02:00.366386 1429762 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0528 22:02:00.366466 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
	I0528 22:02:00.374458 1429762 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:00.374486 1429762 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:02:00.374564 1429762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-292036
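The inspect -f calls above are how minikube resolves which host port Docker published for the guest's SSH port 22/tcp. A minimal standalone sketch of the same lookup, with the container name taken from this run; this is an illustration, not minikube's cli_runner:

    // portlookup.go - resolve the host port Docker published for 22/tcp.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same Go template the log passes to `docker container inspect -f`
    	// (minus the extra quote wrapping the runner adds).
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", tmpl, "old-k8s-version-292036").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 34292 below
    }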
	I0528 22:02:00.427985 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:02:00.447690 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:02:00.461202 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:02:00.463823 1429762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34292 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/old-k8s-version-292036/id_rsa Username:docker}
	I0528 22:02:00.507026 1429762 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:02:00.569272 1429762 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-292036" to be "Ready" ...
	I0528 22:02:00.651986 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:02:00.680152 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:00.686797 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0528 22:02:00.686865 1429762 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0528 22:02:00.695691 1429762 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0528 22:02:00.695765 1429762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0528 22:02:00.773875 1429762 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0528 22:02:00.773915 1429762 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0528 22:02:00.777812 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0528 22:02:00.777837 1429762 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0528 22:02:00.859629 1429762 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:02:00.859657 1429762 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0528 22:02:00.900421 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0528 22:02:00.900449 1429762 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0528 22:02:00.958836 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:02:00.974386 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0528 22:02:00.974464 1429762 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0528 22:02:00.981201 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:00.981284 1429762 retry.go:31] will retry after 367.77541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.047415 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0528 22:02:01.047482 1429762 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0528 22:02:01.056872 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.056953 1429762 retry.go:31] will retry after 292.79716ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
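Each "connection to the server localhost:8443 was refused" above just means the restarted apiserver is not accepting connections yet, so every apply is retried after a growing, jittered delay (the "will retry after ..." lines). A minimal sketch of that pattern, assuming kubectl on PATH; illustrative only, not minikube's retry.go:

    // applyretry.go - re-run `kubectl apply` with jittered, capped
    // exponential backoff until the apiserver comes up.
    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    func applyWithRetry(manifest string, attempts int) error {
    	backoff := 300 * time.Millisecond
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
    			return nil
    		}
    		// Jitter keeps the concurrent appliers (storageclass, dashboard,
    		// metrics-server) from retrying in lockstep.
    		delay := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
    		time.Sleep(delay)
    		if backoff < 2*time.Second {
    			backoff *= 2
    		}
    	}
    	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
    	_ = applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 10)
    }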
	I0528 22:02:01.082709 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0528 22:02:01.082737 1429762 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0528 22:02:01.116896 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0528 22:02:01.116923 1429762 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0528 22:02:01.146692 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0528 22:02:01.146719 1429762 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0528 22:02:01.167355 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.167388 1429762 retry.go:31] will retry after 235.71074ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.182962 1429762 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:02:01.182992 1429762 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0528 22:02:01.225855 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:02:01.312328 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.312360 1429762 retry.go:31] will retry after 209.176998ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.349543 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:02:01.349994 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:01.403550 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:02:01.479875 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.479949 1429762 retry.go:31] will retry after 284.13412ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:02:01.480020 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.480052 1429762 retry.go:31] will retry after 276.12288ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.522342 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:02:01.541276 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.541365 1429762 retry.go:31] will retry after 540.886333ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:02:01.623786 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.623858 1429762 retry.go:31] will retry after 409.480927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.757107 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:01.764474 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:02:01.861280 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.861322 1429762 retry.go:31] will retry after 640.888499ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:02:01.878518 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:01.878549 1429762 retry.go:31] will retry after 690.612703ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.033834 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:02:02.083168 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:02:02.199274 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.199323 1429762 retry.go:31] will retry after 463.521501ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:02:02.282749 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.282881 1429762 retry.go:31] will retry after 769.856834ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.503234 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:02.569945 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:02:02.570131 1429762 node_ready.go:53] error getting node "old-k8s-version-292036": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-292036": dial tcp 192.168.85.2:8443: connect: connection refused
	W0528 22:02:02.588070 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.588113 1429762 retry.go:31] will retry after 1.028273687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.663391 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:02:02.707751 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.707785 1429762 retry.go:31] will retry after 1.028054644s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:02:02.795541 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:02.795594 1429762 retry.go:31] will retry after 519.645877ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:03.053041 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:02:03.251747 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:03.251802 1429762 retry.go:31] will retry after 1.212157546s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:03.316187 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0528 22:02:03.480443 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:03.480505 1429762 retry.go:31] will retry after 1.563452985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:03.617182 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:03.736392 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0528 22:02:03.812194 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:03.812226 1429762 retry.go:31] will retry after 1.765646884s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0528 22:02:04.017303 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:04.017339 1429762 retry.go:31] will retry after 1.40116648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:04.464268 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0528 22:02:04.795054 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:04.795126 1429762 retry.go:31] will retry after 665.175507ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:05.044613 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:02:05.070354 1429762 node_ready.go:53] error getting node "old-k8s-version-292036": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-292036": dial tcp 192.168.85.2:8443: connect: connection refused
	W0528 22:02:05.354783 1429762 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:05.354859 1429762 retry.go:31] will retry after 2.501390204s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0528 22:02:05.419195 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:02:05.460746 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0528 22:02:05.578167 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:02:07.856564 1429762 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0528 22:02:15.659807 1429762 node_ready.go:49] node "old-k8s-version-292036" has status "Ready":"True"
	I0528 22:02:15.659890 1429762 node_ready.go:38] duration metric: took 15.090541557s for node "old-k8s-version-292036" to be "Ready" ...
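The node_ready check above reduces to reading the NodeReady condition off the node object. A client-go sketch of the same test; the kubeconfig path and node name are taken from this run, and the code is illustrative rather than minikube's node_ready.go:

    // nodeready.go - report whether a node's NodeReady condition is True.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	n, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-292036", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("node ready:", nodeReady(n))
    }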
	I0528 22:02:15.659916 1429762 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:02:15.830968 1429762 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-xpq7c" in "kube-system" namespace to be "Ready" ...
	I0528 22:02:15.897825 1429762 pod_ready.go:92] pod "coredns-74ff55c5b-xpq7c" in "kube-system" namespace has status "Ready":"True"
	I0528 22:02:15.897858 1429762 pod_ready.go:81] duration metric: took 66.84172ms for pod "coredns-74ff55c5b-xpq7c" in "kube-system" namespace to be "Ready" ...
	I0528 22:02:15.897870 1429762 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:02:15.943253 1429762 pod_ready.go:92] pod "etcd-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"True"
	I0528 22:02:15.943280 1429762 pod_ready.go:81] duration metric: took 45.379835ms for pod "etcd-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:02:15.943291 1429762 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:02:16.015473 1429762 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"True"
	I0528 22:02:16.015502 1429762 pod_ready.go:81] duration metric: took 72.202271ms for pod "kube-apiserver-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:02:16.015522 1429762 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
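pod_ready.go applies the equivalent per-pod test, polling the PodReady condition until it flips to True or the 6m0s budget runs out. A sketch under the same assumptions as the node example (client construction as above; poll interval illustrative):

    // podready.go - poll a pod's PodReady condition until True or timeout.
    package ready

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func WaitPodReady(client kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		p, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil && podReady(p) {
    			return nil
    		}
    		time.Sleep(2 * time.Second) // roughly the cadence visible in the log
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }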
	I0528 22:02:17.536440 1429762 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.117138584s)
	I0528 22:02:17.537007 1429762 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.076175983s)
	I0528 22:02:17.537038 1429762 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-292036"
	I0528 22:02:17.537101 1429762 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.958859426s)
	I0528 22:02:17.685387 1429762 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.828770783s)
	I0528 22:02:17.688866 1429762 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-292036 addons enable metrics-server
	
	I0528 22:02:17.691447 1429762 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0528 22:02:17.694384 1429762 addons.go:510] duration metric: took 17.592183745s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0528 22:02:18.023294 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:20.521389 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:22.522434 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:24.522723 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:27.034369 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:29.522218 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:32.022309 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:34.521428 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:36.523198 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:38.523734 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:40.525057 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:43.022265 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:45.036336 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:47.521983 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:50.023144 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:52.023218 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:54.522968 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:57.027594 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:02:59.521778 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:01.522193 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:04.022641 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:06.522103 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:09.022545 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:11.521712 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:13.522593 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:16.022996 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:18.025185 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:20.523268 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:23.022589 1429762 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:23.521522 1429762 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:23.521544 1429762 pod_ready.go:81] duration metric: took 1m7.506013601s for pod "kube-controller-manager-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:23.521556 1429762 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cnv4j" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:23.526893 1429762 pod_ready.go:92] pod "kube-proxy-cnv4j" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:23.526920 1429762 pod_ready.go:81] duration metric: took 5.35661ms for pod "kube-proxy-cnv4j" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:23.526932 1429762 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:25.533304 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:27.533735 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:30.037230 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:32.533316 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:35.036743 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:37.532501 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:40.034528 1429762 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:41.533055 1429762 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace has status "Ready":"True"
	I0528 22:03:41.533090 1429762 pod_ready.go:81] duration metric: took 18.006145408s for pod "kube-scheduler-old-k8s-version-292036" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:41.533103 1429762 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace to be "Ready" ...
	I0528 22:03:43.539764 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:45.539858 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:47.540505 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:50.039807 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:52.047155 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:54.538934 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:56.539368 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:03:59.039307 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:01.040382 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:03.040733 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:05.540668 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:08.039716 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:10.040655 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:12.538881 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:14.539713 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:17.041111 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:19.539252 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:21.539863 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:23.549672 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:26.039585 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:28.039870 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:30.044868 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:32.539477 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:34.539557 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:36.541105 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:39.040041 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:41.538965 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:43.539257 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:45.539349 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:04:47.539700 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	[... identical pod_ready.go:102 poll entries repeated roughly every 2-2.5s from 22:04:49 through 22:07:37; pod "metrics-server-9975d5f86-5vgg7" in "kube-system" never reported "Ready" ...]
	I0528 22:07:39.540662 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:41.539183 1429762 pod_ready.go:81] duration metric: took 4m0.006054794s for pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace to be "Ready" ...
	E0528 22:07:41.539211 1429762 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 22:07:41.539221 1429762 pod_ready.go:38] duration metric: took 5m25.879281664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
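	Note: the 4m0s budget above is minikube's extra wait for system-critical pods; metrics-server never became Ready because its image pull keeps failing (see the kubelet problems below). A roughly equivalent manual check is sketched here; the context name old-k8s-version-292036 comes from the log, while the k8s-app=metrics-server label selector is an assumption, since pod_ready.go tracks the pod by name rather than by a documented selector:
	
	  # poll the metrics-server pod for the Ready condition with the same 4m budget
	  kubectl --context old-k8s-version-292036 -n kube-system wait pod \
	    -l k8s-app=metrics-server --for=condition=Ready --timeout=4m0s
	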
	I0528 22:07:41.539241 1429762 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:07:41.539325 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0528 22:07:41.581396 1429762 logs.go:276] 2 containers: [6f5dbe5b1578 2ca82d3e185e]
	I0528 22:07:41.581485 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0528 22:07:41.612667 1429762 logs.go:276] 2 containers: [ddf1864687ea b3da6daaeceb]
	I0528 22:07:41.612747 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0528 22:07:41.637760 1429762 logs.go:276] 2 containers: [11f08e40d7a4 adc72a271675]
	I0528 22:07:41.637844 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0528 22:07:41.654814 1429762 logs.go:276] 2 containers: [f10e2010fb5d 08f05bfb7e4a]
	I0528 22:07:41.654896 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0528 22:07:41.672603 1429762 logs.go:276] 2 containers: [7072b62ac073 2827d087d0f0]
	I0528 22:07:41.672693 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0528 22:07:41.690423 1429762 logs.go:276] 2 containers: [7f2f40603d14 81b8f1bbcbaa]
	I0528 22:07:41.690517 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0528 22:07:41.706113 1429762 logs.go:276] 0 containers: []
	W0528 22:07:41.706136 1429762 logs.go:278] No container was found matching "kindnet"
	I0528 22:07:41.706193 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0528 22:07:41.725404 1429762 logs.go:276] 1 containers: [34ca6395814f]
	I0528 22:07:41.725530 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0528 22:07:41.751424 1429762 logs.go:276] 2 containers: [62900a727b01 ba1d077e39b4]
	I0528 22:07:41.751458 1429762 logs.go:123] Gathering logs for coredns [adc72a271675] ...
	I0528 22:07:41.751470 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adc72a271675"
	I0528 22:07:41.777601 1429762 logs.go:123] Gathering logs for kube-proxy [2827d087d0f0] ...
	I0528 22:07:41.777630 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2827d087d0f0"
	I0528 22:07:41.801398 1429762 logs.go:123] Gathering logs for container status ...
	I0528 22:07:41.801426 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
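	Note: the "container status" command above is a fallback chain: "which crictl || echo crictl" expands to the literal string crictl when the binary is absent, so on a Docker-runtime node the crictl invocation fails and the "|| sudo docker ps -a" branch supplies the listing. A minimal sketch of that expansion on such a node:
	
	  which crictl || echo crictl   # no crictl installed: prints the literal "crictl"
	  sudo crictl ps -a             # command not found, non-zero exit, so ...
	  sudo docker ps -a             # ... the || fallback collects container status
	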
	I0528 22:07:41.867668 1429762 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:07:41.867701 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:07:42.032190 1429762 logs.go:123] Gathering logs for kube-apiserver [2ca82d3e185e] ...
	I0528 22:07:42.032220 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ca82d3e185e"
	I0528 22:07:42.101067 1429762 logs.go:123] Gathering logs for etcd [ddf1864687ea] ...
	I0528 22:07:42.101110 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf1864687ea"
	I0528 22:07:42.138547 1429762 logs.go:123] Gathering logs for kube-proxy [7072b62ac073] ...
	I0528 22:07:42.138580 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7072b62ac073"
	I0528 22:07:42.165445 1429762 logs.go:123] Gathering logs for kube-controller-manager [81b8f1bbcbaa] ...
	I0528 22:07:42.165480 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b8f1bbcbaa"
	I0528 22:07:42.213800 1429762 logs.go:123] Gathering logs for kubernetes-dashboard [34ca6395814f] ...
	I0528 22:07:42.213839 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ca6395814f"
	I0528 22:07:42.241229 1429762 logs.go:123] Gathering logs for storage-provisioner [62900a727b01] ...
	I0528 22:07:42.241261 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62900a727b01"
	I0528 22:07:42.278401 1429762 logs.go:123] Gathering logs for kube-apiserver [6f5dbe5b1578] ...
	I0528 22:07:42.278434 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f5dbe5b1578"
	I0528 22:07:42.324852 1429762 logs.go:123] Gathering logs for etcd [b3da6daaeceb] ...
	I0528 22:07:42.324888 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3da6daaeceb"
	I0528 22:07:42.355992 1429762 logs.go:123] Gathering logs for kube-scheduler [08f05bfb7e4a] ...
	I0528 22:07:42.356026 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08f05bfb7e4a"
	I0528 22:07:42.396234 1429762 logs.go:123] Gathering logs for storage-provisioner [ba1d077e39b4] ...
	I0528 22:07:42.396265 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1d077e39b4"
	I0528 22:07:42.423594 1429762 logs.go:123] Gathering logs for Docker ...
	I0528 22:07:42.423620 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0528 22:07:42.452281 1429762 logs.go:123] Gathering logs for coredns [11f08e40d7a4] ...
	I0528 22:07:42.452313 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f08e40d7a4"
	I0528 22:07:42.473796 1429762 logs.go:123] Gathering logs for kube-scheduler [f10e2010fb5d] ...
	I0528 22:07:42.473829 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f10e2010fb5d"
	I0528 22:07:42.498605 1429762 logs.go:123] Gathering logs for kube-controller-manager [7f2f40603d14] ...
	I0528 22:07:42.498638 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2f40603d14"
	I0528 22:07:42.558280 1429762 logs.go:123] Gathering logs for kubelet ...
	I0528 22:07:42.558353 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:07:42.621988 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.566487    1208 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:42.622280 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.568083    1208 reflector.go:138] object-"kube-system"/"coredns-token-8brxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8brxc" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:42.628522 1429762 logs.go:138] Found kubelet problem: May 28 22:02:17 old-k8s-version-292036 kubelet[1208]: E0528 22:02:17.280439    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.629620 1429762 logs.go:138] Found kubelet problem: May 28 22:02:18 old-k8s-version-292036 kubelet[1208]: E0528 22:02:18.011832    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.630319 1429762 logs.go:138] Found kubelet problem: May 28 22:02:19 old-k8s-version-292036 kubelet[1208]: E0528 22:02:19.075777    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.633832 1429762 logs.go:138] Found kubelet problem: May 28 22:02:33 old-k8s-version-292036 kubelet[1208]: E0528 22:02:33.317495    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.638383 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.084652    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.638876 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.357029    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.639474 1429762 logs.go:138] Found kubelet problem: May 28 22:02:47 old-k8s-version-292036 kubelet[1208]: E0528 22:02:47.254773    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.639953 1429762 logs.go:138] Found kubelet problem: May 28 22:02:48 old-k8s-version-292036 kubelet[1208]: E0528 22:02:48.433142    1208 pod_workers.go:191] Error syncing pod a526d13b-5979-4e3b-9a89-a95a00b5e5ee ("storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"
	W0528 22:07:42.642392 1429762 logs.go:138] Found kubelet problem: May 28 22:02:50 old-k8s-version-292036 kubelet[1208]: E0528 22:02:50.715710    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.645243 1429762 logs.go:138] Found kubelet problem: May 28 22:03:00 old-k8s-version-292036 kubelet[1208]: E0528 22:03:00.347065    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.645612 1429762 logs.go:138] Found kubelet problem: May 28 22:03:05 old-k8s-version-292036 kubelet[1208]: E0528 22:03:05.254675    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.645810 1429762 logs.go:138] Found kubelet problem: May 28 22:03:13 old-k8s-version-292036 kubelet[1208]: E0528 22:03:13.255557    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.648276 1429762 logs.go:138] Found kubelet problem: May 28 22:03:20 old-k8s-version-292036 kubelet[1208]: E0528 22:03:20.709611    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.648485 1429762 logs.go:138] Found kubelet problem: May 28 22:03:28 old-k8s-version-292036 kubelet[1208]: E0528 22:03:28.256445    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.648695 1429762 logs.go:138] Found kubelet problem: May 28 22:03:34 old-k8s-version-292036 kubelet[1208]: E0528 22:03:34.263336    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.650902 1429762 logs.go:138] Found kubelet problem: May 28 22:03:42 old-k8s-version-292036 kubelet[1208]: E0528 22:03:42.283007    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.651125 1429762 logs.go:138] Found kubelet problem: May 28 22:03:47 old-k8s-version-292036 kubelet[1208]: E0528 22:03:47.255783    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.651318 1429762 logs.go:138] Found kubelet problem: May 28 22:03:53 old-k8s-version-292036 kubelet[1208]: E0528 22:03:53.254587    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.651527 1429762 logs.go:138] Found kubelet problem: May 28 22:03:58 old-k8s-version-292036 kubelet[1208]: E0528 22:03:58.254212    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.651720 1429762 logs.go:138] Found kubelet problem: May 28 22:04:04 old-k8s-version-292036 kubelet[1208]: E0528 22:04:04.254475    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654056 1429762 logs.go:138] Found kubelet problem: May 28 22:04:10 old-k8s-version-292036 kubelet[1208]: E0528 22:04:10.693145    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.654252 1429762 logs.go:138] Found kubelet problem: May 28 22:04:19 old-k8s-version-292036 kubelet[1208]: E0528 22:04:19.254270    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654456 1429762 logs.go:138] Found kubelet problem: May 28 22:04:24 old-k8s-version-292036 kubelet[1208]: E0528 22:04:24.254178    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654646 1429762 logs.go:138] Found kubelet problem: May 28 22:04:31 old-k8s-version-292036 kubelet[1208]: E0528 22:04:31.254447    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654849 1429762 logs.go:138] Found kubelet problem: May 28 22:04:36 old-k8s-version-292036 kubelet[1208]: E0528 22:04:36.263663    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655056 1429762 logs.go:138] Found kubelet problem: May 28 22:04:45 old-k8s-version-292036 kubelet[1208]: E0528 22:04:45.254653    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655260 1429762 logs.go:138] Found kubelet problem: May 28 22:04:50 old-k8s-version-292036 kubelet[1208]: E0528 22:04:50.254303    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655466 1429762 logs.go:138] Found kubelet problem: May 28 22:04:58 old-k8s-version-292036 kubelet[1208]: E0528 22:04:58.254565    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655683 1429762 logs.go:138] Found kubelet problem: May 28 22:05:01 old-k8s-version-292036 kubelet[1208]: E0528 22:05:01.255269    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658070 1429762 logs.go:138] Found kubelet problem: May 28 22:05:12 old-k8s-version-292036 kubelet[1208]: E0528 22:05:12.271324    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.658316 1429762 logs.go:138] Found kubelet problem: May 28 22:05:16 old-k8s-version-292036 kubelet[1208]: E0528 22:05:16.254290    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658517 1429762 logs.go:138] Found kubelet problem: May 28 22:05:25 old-k8s-version-292036 kubelet[1208]: E0528 22:05:25.260346    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658725 1429762 logs.go:138] Found kubelet problem: May 28 22:05:30 old-k8s-version-292036 kubelet[1208]: E0528 22:05:30.254264    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658914 1429762 logs.go:138] Found kubelet problem: May 28 22:05:38 old-k8s-version-292036 kubelet[1208]: E0528 22:05:38.258801    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.661520 1429762 logs.go:138] Found kubelet problem: May 28 22:05:42 old-k8s-version-292036 kubelet[1208]: E0528 22:05:42.722092    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.661736 1429762 logs.go:138] Found kubelet problem: May 28 22:05:50 old-k8s-version-292036 kubelet[1208]: E0528 22:05:50.254810    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.661967 1429762 logs.go:138] Found kubelet problem: May 28 22:05:56 old-k8s-version-292036 kubelet[1208]: E0528 22:05:56.262695    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662179 1429762 logs.go:138] Found kubelet problem: May 28 22:06:01 old-k8s-version-292036 kubelet[1208]: E0528 22:06:01.254241    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662413 1429762 logs.go:138] Found kubelet problem: May 28 22:06:09 old-k8s-version-292036 kubelet[1208]: E0528 22:06:09.281534    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662627 1429762 logs.go:138] Found kubelet problem: May 28 22:06:15 old-k8s-version-292036 kubelet[1208]: E0528 22:06:15.254396    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662862 1429762 logs.go:138] Found kubelet problem: May 28 22:06:21 old-k8s-version-292036 kubelet[1208]: E0528 22:06:21.257196    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663061 1429762 logs.go:138] Found kubelet problem: May 28 22:06:30 old-k8s-version-292036 kubelet[1208]: E0528 22:06:30.254217    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663285 1429762 logs.go:138] Found kubelet problem: May 28 22:06:36 old-k8s-version-292036 kubelet[1208]: E0528 22:06:36.254429    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663488 1429762 logs.go:138] Found kubelet problem: May 28 22:06:42 old-k8s-version-292036 kubelet[1208]: E0528 22:06:42.254602    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663709 1429762 logs.go:138] Found kubelet problem: May 28 22:06:50 old-k8s-version-292036 kubelet[1208]: E0528 22:06:50.262748    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663907 1429762 logs.go:138] Found kubelet problem: May 28 22:06:55 old-k8s-version-292036 kubelet[1208]: E0528 22:06:55.258724    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664118 1429762 logs.go:138] Found kubelet problem: May 28 22:07:03 old-k8s-version-292036 kubelet[1208]: E0528 22:07:03.262974    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664352 1429762 logs.go:138] Found kubelet problem: May 28 22:07:10 old-k8s-version-292036 kubelet[1208]: E0528 22:07:10.255540    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664572 1429762 logs.go:138] Found kubelet problem: May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664857 1429762 logs.go:138] Found kubelet problem: May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.665090 1429762 logs.go:138] Found kubelet problem: May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.665294 1429762 logs.go:138] Found kubelet problem: May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.665512 1429762 logs.go:138] Found kubelet problem: May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0528 22:07:42.665524 1429762 logs.go:123] Gathering logs for dmesg ...
	I0528 22:07:42.665542 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:07:42.688308 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:42.688466 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:07:42.688535 1429762 out.go:239] X Problems detected in kubelet:
	W0528 22:07:42.688581 1429762 out.go:239]   May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688622 1429762 out.go:239]   May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688675 1429762 out.go:239]   May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688733 1429762 out.go:239]   May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688747 1429762 out.go:239]   May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0528 22:07:42.688754 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:42.688761 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
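	Note: both recurring kubelet problems are image-pull failures: metrics-server points at fake.domain/registry.k8s.io/echoserver:1.4, whose registry host cannot resolve (the name suggests deliberate test wiring), while dashboard-metrics-scraper fails because registry.k8s.io/echoserver:1.4 is a Docker image manifest v2, schema 1 image that current Docker daemons reject by default, per the DEPRECATION NOTICE above. One way to confirm the manifest schema, sketched here assuming skopeo is available:
	
	  # a schema 1 image reports "schemaVersion": 1 in its raw manifest
	  skopeo inspect --raw docker://registry.k8s.io/echoserver:1.4
	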
	I0528 22:07:52.690625 1429762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:07:52.703193 1429762 api_server.go:72] duration metric: took 5m52.601850338s to wait for apiserver process to appear ...
	I0528 22:07:52.703221 1429762 api_server.go:88] waiting for apiserver healthz status ...
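	Note: the healthz wait polls the apiserver endpoint until it answers ok. A manual equivalent, sketched under the assumption that the cluster uses minikube's default apiserver port 8443 inside the node:
	
	  # run from the host; -k skips cert verification against the node-local endpoint
	  minikube -p old-k8s-version-292036 ssh "curl -sk https://localhost:8443/healthz"
	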
	I0528 22:07:52.703303 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0528 22:07:52.721095 1429762 logs.go:276] 2 containers: [6f5dbe5b1578 2ca82d3e185e]
	I0528 22:07:52.721183 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0528 22:07:52.737993 1429762 logs.go:276] 2 containers: [ddf1864687ea b3da6daaeceb]
	I0528 22:07:52.738130 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0528 22:07:52.754549 1429762 logs.go:276] 2 containers: [11f08e40d7a4 adc72a271675]
	I0528 22:07:52.754636 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0528 22:07:52.771900 1429762 logs.go:276] 2 containers: [f10e2010fb5d 08f05bfb7e4a]
	I0528 22:07:52.771982 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0528 22:07:52.788226 1429762 logs.go:276] 2 containers: [7072b62ac073 2827d087d0f0]
	I0528 22:07:52.788301 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0528 22:07:52.805149 1429762 logs.go:276] 2 containers: [7f2f40603d14 81b8f1bbcbaa]
	I0528 22:07:52.805243 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0528 22:07:52.821459 1429762 logs.go:276] 0 containers: []
	W0528 22:07:52.821486 1429762 logs.go:278] No container was found matching "kindnet"
	I0528 22:07:52.821554 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0528 22:07:52.838787 1429762 logs.go:276] 1 containers: [34ca6395814f]
	I0528 22:07:52.838864 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0528 22:07:52.867219 1429762 logs.go:276] 2 containers: [62900a727b01 ba1d077e39b4]
	I0528 22:07:52.867291 1429762 logs.go:123] Gathering logs for kube-proxy [2827d087d0f0] ...
	I0528 22:07:52.867319 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2827d087d0f0"
	I0528 22:07:52.889298 1429762 logs.go:123] Gathering logs for kube-controller-manager [81b8f1bbcbaa] ...
	I0528 22:07:52.889329 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b8f1bbcbaa"
	I0528 22:07:52.936053 1429762 logs.go:123] Gathering logs for kube-scheduler [08f05bfb7e4a] ...
	I0528 22:07:52.936097 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08f05bfb7e4a"
	I0528 22:07:52.970757 1429762 logs.go:123] Gathering logs for kubelet ...
	I0528 22:07:52.970790 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:07:53.030415 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.566487    1208 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:53.030649 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.568083    1208 reflector.go:138] object-"kube-system"/"coredns-token-8brxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8brxc" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:53.036796 1429762 logs.go:138] Found kubelet problem: May 28 22:02:17 old-k8s-version-292036 kubelet[1208]: E0528 22:02:17.280439    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.037876 1429762 logs.go:138] Found kubelet problem: May 28 22:02:18 old-k8s-version-292036 kubelet[1208]: E0528 22:02:18.011832    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.038791 1429762 logs.go:138] Found kubelet problem: May 28 22:02:19 old-k8s-version-292036 kubelet[1208]: E0528 22:02:19.075777    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.041766 1429762 logs.go:138] Found kubelet problem: May 28 22:02:33 old-k8s-version-292036 kubelet[1208]: E0528 22:02:33.317495    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.046133 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.084652    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.046520 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.357029    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.047045 1429762 logs.go:138] Found kubelet problem: May 28 22:02:47 old-k8s-version-292036 kubelet[1208]: E0528 22:02:47.254773    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.047491 1429762 logs.go:138] Found kubelet problem: May 28 22:02:48 old-k8s-version-292036 kubelet[1208]: E0528 22:02:48.433142    1208 pod_workers.go:191] Error syncing pod a526d13b-5979-4e3b-9a89-a95a00b5e5ee ("storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"
	W0528 22:07:53.049738 1429762 logs.go:138] Found kubelet problem: May 28 22:02:50 old-k8s-version-292036 kubelet[1208]: E0528 22:02:50.715710    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.052288 1429762 logs.go:138] Found kubelet problem: May 28 22:03:00 old-k8s-version-292036 kubelet[1208]: E0528 22:03:00.347065    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.052629 1429762 logs.go:138] Found kubelet problem: May 28 22:03:05 old-k8s-version-292036 kubelet[1208]: E0528 22:03:05.254675    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.052818 1429762 logs.go:138] Found kubelet problem: May 28 22:03:13 old-k8s-version-292036 kubelet[1208]: E0528 22:03:13.255557    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.055089 1429762 logs.go:138] Found kubelet problem: May 28 22:03:20 old-k8s-version-292036 kubelet[1208]: E0528 22:03:20.709611    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.055277 1429762 logs.go:138] Found kubelet problem: May 28 22:03:28 old-k8s-version-292036 kubelet[1208]: E0528 22:03:28.256445    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.055476 1429762 logs.go:138] Found kubelet problem: May 28 22:03:34 old-k8s-version-292036 kubelet[1208]: E0528 22:03:34.263336    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.057559 1429762 logs.go:138] Found kubelet problem: May 28 22:03:42 old-k8s-version-292036 kubelet[1208]: E0528 22:03:42.283007    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.057757 1429762 logs.go:138] Found kubelet problem: May 28 22:03:47 old-k8s-version-292036 kubelet[1208]: E0528 22:03:47.255783    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.057945 1429762 logs.go:138] Found kubelet problem: May 28 22:03:53 old-k8s-version-292036 kubelet[1208]: E0528 22:03:53.254587    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.058152 1429762 logs.go:138] Found kubelet problem: May 28 22:03:58 old-k8s-version-292036 kubelet[1208]: E0528 22:03:58.254212    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.058340 1429762 logs.go:138] Found kubelet problem: May 28 22:04:04 old-k8s-version-292036 kubelet[1208]: E0528 22:04:04.254475    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.060596 1429762 logs.go:138] Found kubelet problem: May 28 22:04:10 old-k8s-version-292036 kubelet[1208]: E0528 22:04:10.693145    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.060783 1429762 logs.go:138] Found kubelet problem: May 28 22:04:19 old-k8s-version-292036 kubelet[1208]: E0528 22:04:19.254270    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.060981 1429762 logs.go:138] Found kubelet problem: May 28 22:04:24 old-k8s-version-292036 kubelet[1208]: E0528 22:04:24.254178    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061168 1429762 logs.go:138] Found kubelet problem: May 28 22:04:31 old-k8s-version-292036 kubelet[1208]: E0528 22:04:31.254447    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061370 1429762 logs.go:138] Found kubelet problem: May 28 22:04:36 old-k8s-version-292036 kubelet[1208]: E0528 22:04:36.263663    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061556 1429762 logs.go:138] Found kubelet problem: May 28 22:04:45 old-k8s-version-292036 kubelet[1208]: E0528 22:04:45.254653    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061753 1429762 logs.go:138] Found kubelet problem: May 28 22:04:50 old-k8s-version-292036 kubelet[1208]: E0528 22:04:50.254303    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061937 1429762 logs.go:138] Found kubelet problem: May 28 22:04:58 old-k8s-version-292036 kubelet[1208]: E0528 22:04:58.254565    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.062144 1429762 logs.go:138] Found kubelet problem: May 28 22:05:01 old-k8s-version-292036 kubelet[1208]: E0528 22:05:01.255269    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.064248 1429762 logs.go:138] Found kubelet problem: May 28 22:05:12 old-k8s-version-292036 kubelet[1208]: E0528 22:05:12.271324    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.064452 1429762 logs.go:138] Found kubelet problem: May 28 22:05:16 old-k8s-version-292036 kubelet[1208]: E0528 22:05:16.254290    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.064642 1429762 logs.go:138] Found kubelet problem: May 28 22:05:25 old-k8s-version-292036 kubelet[1208]: E0528 22:05:25.260346    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.064840 1429762 logs.go:138] Found kubelet problem: May 28 22:05:30 old-k8s-version-292036 kubelet[1208]: E0528 22:05:30.254264    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.065028 1429762 logs.go:138] Found kubelet problem: May 28 22:05:38 old-k8s-version-292036 kubelet[1208]: E0528 22:05:38.258801    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.067327 1429762 logs.go:138] Found kubelet problem: May 28 22:05:42 old-k8s-version-292036 kubelet[1208]: E0528 22:05:42.722092    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.067517 1429762 logs.go:138] Found kubelet problem: May 28 22:05:50 old-k8s-version-292036 kubelet[1208]: E0528 22:05:50.254810    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.067715 1429762 logs.go:138] Found kubelet problem: May 28 22:05:56 old-k8s-version-292036 kubelet[1208]: E0528 22:05:56.262695    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.067905 1429762 logs.go:138] Found kubelet problem: May 28 22:06:01 old-k8s-version-292036 kubelet[1208]: E0528 22:06:01.254241    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068110 1429762 logs.go:138] Found kubelet problem: May 28 22:06:09 old-k8s-version-292036 kubelet[1208]: E0528 22:06:09.281534    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068295 1429762 logs.go:138] Found kubelet problem: May 28 22:06:15 old-k8s-version-292036 kubelet[1208]: E0528 22:06:15.254396    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068493 1429762 logs.go:138] Found kubelet problem: May 28 22:06:21 old-k8s-version-292036 kubelet[1208]: E0528 22:06:21.257196    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068679 1429762 logs.go:138] Found kubelet problem: May 28 22:06:30 old-k8s-version-292036 kubelet[1208]: E0528 22:06:30.254217    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068878 1429762 logs.go:138] Found kubelet problem: May 28 22:06:36 old-k8s-version-292036 kubelet[1208]: E0528 22:06:36.254429    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069068 1429762 logs.go:138] Found kubelet problem: May 28 22:06:42 old-k8s-version-292036 kubelet[1208]: E0528 22:06:42.254602    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069265 1429762 logs.go:138] Found kubelet problem: May 28 22:06:50 old-k8s-version-292036 kubelet[1208]: E0528 22:06:50.262748    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069452 1429762 logs.go:138] Found kubelet problem: May 28 22:06:55 old-k8s-version-292036 kubelet[1208]: E0528 22:06:55.258724    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069649 1429762 logs.go:138] Found kubelet problem: May 28 22:07:03 old-k8s-version-292036 kubelet[1208]: E0528 22:07:03.262974    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069835 1429762 logs.go:138] Found kubelet problem: May 28 22:07:10 old-k8s-version-292036 kubelet[1208]: E0528 22:07:10.255540    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070040 1429762 logs.go:138] Found kubelet problem: May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070230 1429762 logs.go:138] Found kubelet problem: May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070429 1429762 logs.go:138] Found kubelet problem: May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070614 1429762 logs.go:138] Found kubelet problem: May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070813 1429762 logs.go:138] Found kubelet problem: May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070998 1429762 logs.go:138] Found kubelet problem: May 28 22:07:49 old-k8s-version-292036 kubelet[1208]: E0528 22:07:49.262987    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:07:53.071008 1429762 logs.go:123] Gathering logs for kube-apiserver [6f5dbe5b1578] ...
	I0528 22:07:53.071023 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f5dbe5b1578"
	I0528 22:07:53.128865 1429762 logs.go:123] Gathering logs for etcd [ddf1864687ea] ...
	I0528 22:07:53.128895 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf1864687ea"
	I0528 22:07:53.154934 1429762 logs.go:123] Gathering logs for etcd [b3da6daaeceb] ...
	I0528 22:07:53.154967 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3da6daaeceb"
	I0528 22:07:53.178801 1429762 logs.go:123] Gathering logs for coredns [11f08e40d7a4] ...
	I0528 22:07:53.178830 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f08e40d7a4"
	I0528 22:07:53.199635 1429762 logs.go:123] Gathering logs for coredns [adc72a271675] ...
	I0528 22:07:53.199666 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adc72a271675"
	I0528 22:07:53.222277 1429762 logs.go:123] Gathering logs for kube-scheduler [f10e2010fb5d] ...
	I0528 22:07:53.222306 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f10e2010fb5d"
	I0528 22:07:53.248483 1429762 logs.go:123] Gathering logs for kube-controller-manager [7f2f40603d14] ...
	I0528 22:07:53.248512 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2f40603d14"
	I0528 22:07:53.304485 1429762 logs.go:123] Gathering logs for kubernetes-dashboard [34ca6395814f] ...
	I0528 22:07:53.304517 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ca6395814f"
	I0528 22:07:53.328266 1429762 logs.go:123] Gathering logs for dmesg ...
	I0528 22:07:53.328297 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:07:53.349197 1429762 logs.go:123] Gathering logs for kube-apiserver [2ca82d3e185e] ...
	I0528 22:07:53.349227 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ca82d3e185e"
	I0528 22:07:53.434007 1429762 logs.go:123] Gathering logs for kube-proxy [7072b62ac073] ...
	I0528 22:07:53.434055 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7072b62ac073"
	I0528 22:07:53.459969 1429762 logs.go:123] Gathering logs for storage-provisioner [62900a727b01] ...
	I0528 22:07:53.460000 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62900a727b01"
	I0528 22:07:53.479881 1429762 logs.go:123] Gathering logs for storage-provisioner [ba1d077e39b4] ...
	I0528 22:07:53.479912 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1d077e39b4"
	I0528 22:07:53.501261 1429762 logs.go:123] Gathering logs for Docker ...
	I0528 22:07:53.501338 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0528 22:07:53.533031 1429762 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:07:53.533065 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:07:53.679809 1429762 logs.go:123] Gathering logs for container status ...
	I0528 22:07:53.679841 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:07:53.733270 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:53.733296 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:07:53.733350 1429762 out.go:239] X Problems detected in kubelet:
	W0528 22:07:53.733365 1429762 out.go:239]   May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733374 1429762 out.go:239]   May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733387 1429762 out.go:239]   May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733396 1429762 out.go:239]   May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733419 1429762 out.go:239]   May 28 22:07:49 old-k8s-version-292036 kubelet[1208]: E0528 22:07:49.262987    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:07:53.733426 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:53.733434 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:08:03.735508 1429762 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0528 22:08:03.745216 1429762 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0528 22:08:03.747464 1429762 out.go:177] 
	W0528 22:08:03.749118 1429762 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0528 22:08:03.749161 1429762 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0528 22:08:03.749190 1429762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0528 22:08:03.749196 1429762 out.go:239] * 
	W0528 22:08:03.750161 1429762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 22:08:03.752030 1429762 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-292036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
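The failing invocation can be replayed locally against a clean profile; a minimal sketch in shell, assuming the out/minikube-linux-arm64 binary from this workspace and following the cleanup step the log output itself suggests:

	# Remove the stale profile first, per the suggestion in the output above.
	out/minikube-linux-arm64 delete --all --purge

	# Re-run the start that exited with status 102 (K8S_UNHEALTHY_CONTROL_PLANE),
	# using the exact flags recorded by the test.
	out/minikube-linux-arm64 start -p old-k8s-version-292036 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=docker --container-runtime=docker \
	  --kubernetes-version=v1.20.0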
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-292036
helpers_test.go:235: (dbg) docker inspect old-k8s-version-292036:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2a68ea2bb8da5fa72fbbc90f50057f93cce6727fb42767e3d073cff3cfe76ff2",
	        "Created": "2024-05-28T21:59:03.292006525Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1429947,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-28T22:01:52.408970783Z",
	            "FinishedAt": "2024-05-28T22:01:51.365843525Z"
	        },
	        "Image": "sha256:acea75078737755d2f999491dfa245ea1d1040bffc73283b8c9ba9ff1fde89b5",
	        "ResolvConfPath": "/var/lib/docker/containers/2a68ea2bb8da5fa72fbbc90f50057f93cce6727fb42767e3d073cff3cfe76ff2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a68ea2bb8da5fa72fbbc90f50057f93cce6727fb42767e3d073cff3cfe76ff2/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a68ea2bb8da5fa72fbbc90f50057f93cce6727fb42767e3d073cff3cfe76ff2/hosts",
	        "LogPath": "/var/lib/docker/containers/2a68ea2bb8da5fa72fbbc90f50057f93cce6727fb42767e3d073cff3cfe76ff2/2a68ea2bb8da5fa72fbbc90f50057f93cce6727fb42767e3d073cff3cfe76ff2-json.log",
	        "Name": "/old-k8s-version-292036",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-292036:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-292036",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/806f542099345972b368bda6b848034cdbddae3cb373cf0d44bd62bed86c39e3-init/diff:/var/lib/docker/overlay2/8e655f7297a0818a5a7e390e8907c6f4d26023cd8c9930299bc7c4352e4766d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/806f542099345972b368bda6b848034cdbddae3cb373cf0d44bd62bed86c39e3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/806f542099345972b368bda6b848034cdbddae3cb373cf0d44bd62bed86c39e3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/806f542099345972b368bda6b848034cdbddae3cb373cf0d44bd62bed86c39e3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-292036",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-292036/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-292036",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-292036",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-292036",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b14ad33f032f3a6ff3ddc33d2f39d01f890e44aba7502e26d6482af85b0de10",
	            "SandboxKey": "/var/run/docker/netns/8b14ad33f032",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34292"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34291"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34290"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34289"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-292036": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "2338b55e834b75cf3dc3955069f0903b6fa3d6d2863b104b22c0b47d4ef35116",
	                    "EndpointID": "3f4a53f60fd6848cd0f6b65afca47ee6deb5222e18de4c5ebad9bf8f427c8215",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-292036",
	                        "2a68ea2bb8da"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
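For quick triage of an inspect dump like the one above, the relevant fields can be extracted with a Go template rather than reading the full JSON; a sketch (the container name is taken from this report):

	docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}} started={{.State.StartedAt}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-292036

Here this would report a running container, started 2024-05-28T22:01:52Z, with address 192.168.85.2, the same address the apiserver healthz probe reached in the stderr log.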
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-292036 -n old-k8s-version-292036
E0528 22:08:03.960546 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-292036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-292036 logs -n 25: (1.217944529s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-608294 sudo cat                              | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 21:59 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-608294 sudo                                  | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 21:59 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-608294 sudo                                  | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC |                     |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-608294 sudo                                  | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 21:59 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-608294 sudo find                             | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 21:59 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-608294 sudo crio                             | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 21:59 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-608294                                       | bridge-608294                | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 21:59 UTC |
	| start   | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 21:59 UTC | 28 May 24 22:01 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-438399            | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-438399                 | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-292036        | old-k8s-version-292036       | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:01 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-292036                              | old-k8s-version-292036       | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:01 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-292036             | old-k8s-version-292036       | jenkins | v1.33.1 | 28 May 24 22:01 UTC | 28 May 24 22:01 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-292036                              | old-k8s-version-292036       | jenkins | v1.33.1 | 28 May 24 22:01 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| image   | embed-certs-438399 image list                          | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:06 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:06 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:06 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:06 UTC |
	| delete  | -p embed-certs-438399                                  | embed-certs-438399           | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:06 UTC |
	| delete  | -p                                                     | disable-driver-mounts-501801 | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:06 UTC |
	|         | disable-driver-mounts-501801                           |                              |         |         |                     |                     |
	| start   | -p no-preload-930485                                   | no-preload-930485            | jenkins | v1.33.1 | 28 May 24 22:06 UTC | 28 May 24 22:07 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-930485             | no-preload-930485            | jenkins | v1.33.1 | 28 May 24 22:07 UTC | 28 May 24 22:07 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-930485                                   | no-preload-930485            | jenkins | v1.33.1 | 28 May 24 22:07 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 22:06:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 22:06:38.945325 1437835 out.go:291] Setting OutFile to fd 1 ...
	I0528 22:06:38.945524 1437835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:06:38.945555 1437835 out.go:304] Setting ErrFile to fd 2...
	I0528 22:06:38.945577 1437835 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:06:38.945941 1437835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 22:06:38.946466 1437835 out.go:298] Setting JSON to false
	I0528 22:06:38.947608 1437835 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20948,"bootTime":1716913051,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 22:06:38.947735 1437835 start.go:139] virtualization:  
	I0528 22:06:38.951081 1437835 out.go:177] * [no-preload-930485] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 22:06:38.954576 1437835 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 22:06:38.956653 1437835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 22:06:38.954667 1437835 notify.go:220] Checking for updates...
	I0528 22:06:38.961198 1437835 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 22:06:38.962957 1437835 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 22:06:38.965033 1437835 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 22:06:38.967027 1437835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 22:06:38.970323 1437835 config.go:182] Loaded profile config "old-k8s-version-292036": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0528 22:06:38.970423 1437835 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 22:06:38.990296 1437835 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 22:06:38.990436 1437835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:06:39.057271 1437835 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:06:39.046914662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:06:39.057422 1437835 docker.go:295] overlay module found
	I0528 22:06:39.063476 1437835 out.go:177] * Using the docker driver based on user configuration
	I0528 22:06:39.065724 1437835 start.go:297] selected driver: docker
	I0528 22:06:39.065746 1437835 start.go:901] validating driver "docker" against <nil>
	I0528 22:06:39.065762 1437835 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 22:06:39.066607 1437835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 22:06:39.146736 1437835 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 22:06:39.13626231 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 22:06:39.146965 1437835 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 22:06:39.147306 1437835 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:06:39.150302 1437835 out.go:177] * Using Docker driver with root privileges
	I0528 22:06:39.152426 1437835 cni.go:84] Creating CNI manager for ""
	I0528 22:06:39.152460 1437835 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 22:06:39.152475 1437835 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 22:06:39.152566 1437835 start.go:340] cluster config:
	{Name:no-preload-930485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-930485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:06:39.154580 1437835 out.go:177] * Starting "no-preload-930485" primary control-plane node in "no-preload-930485" cluster
	I0528 22:06:39.156754 1437835 cache.go:121] Beginning downloading kic base image for docker with docker
	I0528 22:06:39.159099 1437835 out.go:177] * Pulling base image v0.0.44-1716228441-18934 ...
	I0528 22:06:39.161422 1437835 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 22:06:39.161568 1437835 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/config.json ...
	I0528 22:06:39.161620 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/config.json: {Name:mk797dce895fa88b8e0b19a1b2e076b0a707bb0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:06:39.161745 1437835 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 22:06:39.162086 1437835 cache.go:107] acquiring lock: {Name:mk6c6f070861989ba9aa49cd705ea60bf613391f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.162273 1437835 cache.go:107] acquiring lock: {Name:mk14de0be79d1f14f595d5d4d5ef889f53c36221 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.162632 1437835 cache.go:107] acquiring lock: {Name:mkefff2528c2e9fd3815415b648d47cbc645d912 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.162810 1437835 cache.go:115] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0528 22:06:39.162838 1437835 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 756.898µs
	I0528 22:06:39.162995 1437835 cache.go:107] acquiring lock: {Name:mkd79aec8dab9fbac4a8b34aa594b6fc351985cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.163200 1437835 cache.go:107] acquiring lock: {Name:mk0d0d69845f6c884d13c0774516ab47dd7d975f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.163338 1437835 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:39.163651 1437835 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0528 22:06:39.163741 1437835 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:39.164082 1437835 cache.go:107] acquiring lock: {Name:mk9cb0a37c89e23d85914781fd80e58bf8f0ffad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.164257 1437835 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:39.164589 1437835 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:39.164513 1437835 cache.go:107] acquiring lock: {Name:mkd00f7c3f49b2e54cec012e6163c3521675e7a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.165450 1437835 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:39.165758 1437835 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0528 22:06:39.166101 1437835 cache.go:107] acquiring lock: {Name:mke600392441b6575176b8e9b44c224361e8139a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.167522 1437835 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:39.171329 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:39.172155 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:39.172437 1437835 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:39.172672 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:39.173653 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:39.173931 1437835 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0528 22:06:39.174188 1437835 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:39.191068 1437835 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon, skipping pull
	I0528 22:06:39.191091 1437835 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in daemon, skipping load
	I0528 22:06:39.191109 1437835 cache.go:194] Successfully downloaded all kic artifacts
	I0528 22:06:39.191139 1437835 start.go:360] acquireMachinesLock for no-preload-930485: {Name:mk27c586c15812bd20d754e4367c5b94a2e98589 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0528 22:06:39.191500 1437835 start.go:364] duration metric: took 340.045µs to acquireMachinesLock for "no-preload-930485"
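
The acquireMachinesLock lines, with their {Delay:500ms Timeout:10m0s} parameters, describe a retry-until-timeout lock around machine creation (here acquired in 340µs because nothing else held it). A minimal Go sketch of that pattern, using a hypothetical O_EXCL lock file in place of the mutex library minikube actually uses:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file, retrying every `delay`
    // until `timeout` elapses -- the shape the "{Delay:500ms Timeout:10m0s}"
    // parameters in the log suggest. The O_EXCL lock file is illustrative,
    // not minikube's real locking mechanism.
    func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f, nil // caller closes f and removes path to release
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        f, err := acquireLock("/tmp/no-preload-930485.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        fmt.Println("lock held")
    }
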
	I0528 22:06:39.191601 1437835 start.go:93] Provisioning new machine with config: &{Name:no-preload-930485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-930485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 22:06:39.191911 1437835 start.go:125] createHost starting for "" (driver="docker")
	I0528 22:06:38.041650 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:40.046898 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:39.197592 1437835 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0528 22:06:39.197848 1437835 start.go:159] libmachine.API.Create for "no-preload-930485" (driver="docker")
	I0528 22:06:39.197877 1437835 client.go:168] LocalClient.Create starting
	I0528 22:06:39.197941 1437835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem
	I0528 22:06:39.197977 1437835 main.go:141] libmachine: Decoding PEM data...
	I0528 22:06:39.197994 1437835 main.go:141] libmachine: Parsing certificate...
	I0528 22:06:39.198115 1437835 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem
	I0528 22:06:39.198137 1437835 main.go:141] libmachine: Decoding PEM data...
	I0528 22:06:39.198147 1437835 main.go:141] libmachine: Parsing certificate...
	I0528 22:06:39.198546 1437835 cli_runner.go:164] Run: docker network inspect no-preload-930485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0528 22:06:39.240244 1437835 cli_runner.go:211] docker network inspect no-preload-930485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0528 22:06:39.240323 1437835 network_create.go:281] running [docker network inspect no-preload-930485] to gather additional debugging logs...
	I0528 22:06:39.240348 1437835 cli_runner.go:164] Run: docker network inspect no-preload-930485
	W0528 22:06:39.279587 1437835 cli_runner.go:211] docker network inspect no-preload-930485 returned with exit code 1
	I0528 22:06:39.279619 1437835 network_create.go:284] error running [docker network inspect no-preload-930485]: docker network inspect no-preload-930485: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-930485 not found
	I0528 22:06:39.279632 1437835 network_create.go:286] output of [docker network inspect no-preload-930485]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-930485 not found
	
	** /stderr **
	I0528 22:06:39.279741 1437835 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 22:06:39.296138 1437835 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4dee3fd2657d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:4b:dd:92:37} reservation:<nil>}
	I0528 22:06:39.296508 1437835 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-019a45eceacd IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:38:99:ae:05} reservation:<nil>}
	I0528 22:06:39.296921 1437835 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3aeed4f51df6 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ba:26:08:e4} reservation:<nil>}
	I0528 22:06:39.297432 1437835 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a6b0f0}
	I0528 22:06:39.297458 1437835 network_create.go:124] attempt to create docker network no-preload-930485 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0528 22:06:39.297531 1437835 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-930485 no-preload-930485
	I0528 22:06:39.389900 1437835 network_create.go:108] docker network no-preload-930485 192.168.76.0/24 created
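
The three "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.76.0/24" show a first-fit scan over candidate private /24 ranges (third octet 49, 58, 67, 76, ...). A minimal Go sketch of that selection loop; the isTaken helper is a hypothetical stand-in for the real check against existing bridge interfaces and reservations:

    package main

    import (
        "fmt"
        "net"
    )

    // isTaken is a hypothetical stand-in for the real check, which inspects
    // existing bridge interfaces and Docker networks; here it just consults
    // a set of subnets known to be in use.
    func isTaken(subnet string, used map[string]bool) bool {
        return used[subnet]
    }

    // freePrivateSubnet walks 192.168.x.0/24 candidates in steps of 9 in the
    // third octet, mirroring the 49 -> 58 -> 67 -> 76 progression in the log,
    // and returns the first subnet that is not taken.
    func freePrivateSubnet(used map[string]bool) (string, error) {
        for third := 49; third <= 247; third += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", third)
            if _, _, err := net.ParseCIDR(subnet); err != nil {
                return "", err
            }
            if !isTaken(subnet, used) {
                return subnet, nil
            }
        }
        return "", fmt.Errorf("no free /24 found")
    }

    func main() {
        used := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
        }
        subnet, _ := freePrivateSubnet(used)
        fmt.Println(subnet) // prints 192.168.76.0/24, as in the log
    }
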
	I0528 22:06:39.389932 1437835 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-930485" container
	I0528 22:06:39.390060 1437835 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0528 22:06:39.426869 1437835 cli_runner.go:164] Run: docker volume create no-preload-930485 --label name.minikube.sigs.k8s.io=no-preload-930485 --label created_by.minikube.sigs.k8s.io=true
	I0528 22:06:39.448618 1437835 oci.go:103] Successfully created a docker volume no-preload-930485
	I0528 22:06:39.448721 1437835 cli_runner.go:164] Run: docker run --rm --name no-preload-930485-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-930485 --entrypoint /usr/bin/test -v no-preload-930485:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 -d /var/lib
	I0528 22:06:39.526605 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0528 22:06:39.542718 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0528 22:06:39.547893 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0528 22:06:39.567520 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0528 22:06:39.570600 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0528 22:06:39.577557 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0528 22:06:39.577729 1437835 cache.go:162] opening:  /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0528 22:06:39.625623 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0528 22:06:39.625653 1437835 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 461.586133ms
	I0528 22:06:39.625667 1437835 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
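
Each "cache image ... took ..." / "save to tar file ... succeeded" pair records an image being pulled and written as a tarball under .minikube/cache/images/arm64. A sketch of the same pull-and-save step using go-containerregistry's crane helpers; it illustrates the shape of the operation rather than minikube's exact code path, and the destination path is made up:

    package main

    import (
        "log"

        "github.com/google/go-containerregistry/pkg/crane"
    )

    // cacheImage pulls ref from its registry and writes it to a local
    // tarball, the artifact shape the cache lines above report.
    func cacheImage(ref, dest string) error {
        img, err := crane.Pull(ref) // pulls for the host platform (arm64 here)
        if err != nil {
            return err
        }
        return crane.Save(img, ref, dest) // tarball on disk, loadable later
    }

    func main() {
        if err := cacheImage("registry.k8s.io/pause:3.9", "/tmp/pause_3.9"); err != nil {
            log.Fatal(err)
        }
    }
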
	I0528 22:06:40.283088 1437835 oci.go:107] Successfully prepared a docker volume no-preload-930485
	I0528 22:06:40.283123 1437835 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	W0528 22:06:40.286170 1437835 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0528 22:06:40.286368 1437835 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0528 22:06:40.379057 1437835 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-930485 --name no-preload-930485 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-930485 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-930485 --network no-preload-930485 --ip 192.168.76.2 --volume no-preload-930485:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862
	I0528 22:06:40.457313 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0528 22:06:40.457391 1437835 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.291296528s
	I0528 22:06:40.457417 1437835 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0528 22:06:40.494813 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0528 22:06:40.494842 1437835 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1" took 1.331651191s
	I0528 22:06:40.494887 1437835 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0528 22:06:40.678726 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0528 22:06:40.678772 1437835 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1" took 1.515782379s
	I0528 22:06:40.678792 1437835 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0528 22:06:40.975067 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Running}}
	I0528 22:06:40.997283 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0528 22:06:40.997310 1437835 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1" took 1.834681791s
	I0528 22:06:40.997323 1437835 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0528 22:06:41.002718 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Status}}
	I0528 22:06:41.030348 1437835 cli_runner.go:164] Run: docker exec no-preload-930485 stat /var/lib/dpkg/alternatives/iptables
	I0528 22:06:41.138251 1437835 oci.go:144] the created container "no-preload-930485" has a running status.
	I0528 22:06:41.138315 1437835 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa...
	I0528 22:06:41.332237 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0528 22:06:41.332271 1437835 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1" took 2.170008021s
	I0528 22:06:41.332285 1437835 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0528 22:06:41.938895 1437835 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0528 22:06:41.968523 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Status}}
	I0528 22:06:41.994447 1437835 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0528 22:06:41.994467 1437835 kic_runner.go:114] Args: [docker exec --privileged no-preload-930485 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0528 22:06:42.040665 1437835 cache.go:157] /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 exists
	I0528 22:06:42.040794 1437835 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0" took 2.876284205s
	I0528 22:06:42.040823 1437835 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0528 22:06:42.040871 1437835 cache.go:87] Successfully saved all images to host disk.
	I0528 22:06:42.095000 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Status}}
	I0528 22:06:42.122295 1437835 machine.go:94] provisionDockerMachine start ...
	I0528 22:06:42.122398 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:42.143553 1437835 main.go:141] libmachine: Using SSH client type: native
	I0528 22:06:42.143874 1437835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34297 <nil> <nil>}
	I0528 22:06:42.143884 1437835 main.go:141] libmachine: About to run SSH command:
	hostname
	I0528 22:06:42.298906 1437835 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-930485
	
	I0528 22:06:42.298974 1437835 ubuntu.go:169] provisioning hostname "no-preload-930485"
	I0528 22:06:42.299077 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:42.321838 1437835 main.go:141] libmachine: Using SSH client type: native
	I0528 22:06:42.322103 1437835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34297 <nil> <nil>}
	I0528 22:06:42.322121 1437835 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-930485 && echo "no-preload-930485" | sudo tee /etc/hostname
	I0528 22:06:42.473882 1437835 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-930485
	
	I0528 22:06:42.473969 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:42.491102 1437835 main.go:141] libmachine: Using SSH client type: native
	I0528 22:06:42.491360 1437835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34297 <nil> <nil>}
	I0528 22:06:42.491385 1437835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-930485' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-930485/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-930485' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0528 22:06:42.614978 1437835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
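
Each provisioning step above ("About to run SSH command: ...") goes through libmachine's native SSH client against the forwarded port 34297 on 127.0.0.1. A rough equivalent with golang.org/x/crypto/ssh, reusing the address, user, and key path shown in the log; the helper itself is illustrative, not libmachine's implementation:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the forwarded port and runs a single command, the way
    // the native client runs `hostname` and the tee/sed snippets above.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only on a loopback test rig
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("127.0.0.1:34297", "docker",
            "/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa",
            "hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out) // no-preload-930485
    }
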
	I0528 22:06:42.615003 1437835 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18966-1064873/.minikube CaCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18966-1064873/.minikube}
	I0528 22:06:42.615031 1437835 ubuntu.go:177] setting up certificates
	I0528 22:06:42.615046 1437835 provision.go:84] configureAuth start
	I0528 22:06:42.615116 1437835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-930485
	I0528 22:06:42.635641 1437835 provision.go:143] copyHostCerts
	I0528 22:06:42.635715 1437835 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem, removing ...
	I0528 22:06:42.635730 1437835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem
	I0528 22:06:42.635808 1437835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/cert.pem (1123 bytes)
	I0528 22:06:42.635909 1437835 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem, removing ...
	I0528 22:06:42.635921 1437835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem
	I0528 22:06:42.635949 1437835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/key.pem (1679 bytes)
	I0528 22:06:42.636009 1437835 exec_runner.go:144] found /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem, removing ...
	I0528 22:06:42.636018 1437835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem
	I0528 22:06:42.636052 1437835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.pem (1078 bytes)
	I0528 22:06:42.636109 1437835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem org=jenkins.no-preload-930485 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-930485]
	I0528 22:06:43.129177 1437835 provision.go:177] copyRemoteCerts
	I0528 22:06:43.129245 1437835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0528 22:06:43.129288 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:43.149572 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:43.239032 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0528 22:06:43.274133 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0528 22:06:43.300842 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0528 22:06:43.326269 1437835 provision.go:87] duration metric: took 711.201741ms to configureAuth
	I0528 22:06:43.326298 1437835 ubuntu.go:193] setting minikube options for container-runtime
	I0528 22:06:43.326489 1437835 config.go:182] Loaded profile config "no-preload-930485": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 22:06:43.326562 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:43.342626 1437835 main.go:141] libmachine: Using SSH client type: native
	I0528 22:06:43.342899 1437835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34297 <nil> <nil>}
	I0528 22:06:43.342915 1437835 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0528 22:06:43.470524 1437835 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0528 22:06:43.470544 1437835 ubuntu.go:71] root file system type: overlay
	I0528 22:06:43.470647 1437835 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0528 22:06:43.470715 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:43.487304 1437835 main.go:141] libmachine: Using SSH client type: native
	I0528 22:06:43.487554 1437835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34297 <nil> <nil>}
	I0528 22:06:43.487637 1437835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0528 22:06:43.622371 1437835 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0528 22:06:43.622463 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:43.646269 1437835 main.go:141] libmachine: Using SSH client type: native
	I0528 22:06:43.646519 1437835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 34297 <nil> <nil>}
	I0528 22:06:43.646544 1437835 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0528 22:06:42.055586 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:44.540006 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:44.417318 1437835 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-16 08:38:08.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-28 22:06:43.615785232 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
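
The `sudo diff -u ... || { mv ...; systemctl ...; }` command above is the idempotence guard for the unit file: the freshly rendered docker.service.new only replaces the installed unit, and only triggers a daemon-reload/enable/restart, when the contents actually differ (here they did, hence the diff output). The same compare-then-swap pattern as a Go sketch, with the paths from the log; illustrative, not minikube's code:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    // swapIfChanged compares the freshly rendered unit with the installed
    // one and only moves it into place and bounces the service when the
    // contents differ, mirroring the shell one-liner in the log.
    func swapIfChanged(installed, rendered, service string) error {
        old, _ := os.ReadFile(installed) // a missing unit reads as empty, forcing the swap
        fresh, err := os.ReadFile(rendered)
        if err != nil {
            return err
        }
        if bytes.Equal(old, fresh) {
            return os.Remove(rendered) // unchanged: drop the .new file, no restart
        }
        if err := os.Rename(rendered, installed); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", service},
            {"systemctl", "restart", service},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() {
        if err := swapIfChanged("/lib/systemd/system/docker.service",
            "/lib/systemd/system/docker.service.new", "docker"); err != nil {
            log.Fatal(err)
        }
    }
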
	
	I0528 22:06:44.417352 1437835 machine.go:97] duration metric: took 2.295035045s to provisionDockerMachine
	I0528 22:06:44.417364 1437835 client.go:171] duration metric: took 5.219480852s to LocalClient.Create
	I0528 22:06:44.417376 1437835 start.go:167] duration metric: took 5.219529204s to libmachine.API.Create "no-preload-930485"
	I0528 22:06:44.417383 1437835 start.go:293] postStartSetup for "no-preload-930485" (driver="docker")
	I0528 22:06:44.417395 1437835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0528 22:06:44.417468 1437835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0528 22:06:44.417514 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:44.436299 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:44.527508 1437835 ssh_runner.go:195] Run: cat /etc/os-release
	I0528 22:06:44.530879 1437835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0528 22:06:44.530913 1437835 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0528 22:06:44.530925 1437835 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0528 22:06:44.530932 1437835 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0528 22:06:44.530942 1437835 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/addons for local assets ...
	I0528 22:06:44.531003 1437835 filesync.go:126] Scanning /home/jenkins/minikube-integration/18966-1064873/.minikube/files for local assets ...
	I0528 22:06:44.531085 1437835 filesync.go:149] local asset: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem -> 10703092.pem in /etc/ssl/certs
	I0528 22:06:44.531185 1437835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0528 22:06:44.542709 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem --> /etc/ssl/certs/10703092.pem (1708 bytes)
	I0528 22:06:44.567456 1437835 start.go:296] duration metric: took 150.058057ms for postStartSetup
	I0528 22:06:44.567814 1437835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-930485
	I0528 22:06:44.584365 1437835 profile.go:143] Saving config to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/config.json ...
	I0528 22:06:44.584630 1437835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 22:06:44.584684 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:44.603371 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:44.694498 1437835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0528 22:06:44.701136 1437835 start.go:128] duration metric: took 5.509205986s to createHost
	I0528 22:06:44.701164 1437835 start.go:83] releasing machines lock for "no-preload-930485", held for 5.509649742s
	I0528 22:06:44.701262 1437835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-930485
	I0528 22:06:44.724003 1437835 ssh_runner.go:195] Run: cat /version.json
	I0528 22:06:44.724064 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:44.724317 1437835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0528 22:06:44.724376 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:44.740945 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:44.741475 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:44.829444 1437835 ssh_runner.go:195] Run: systemctl --version
	I0528 22:06:44.961694 1437835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0528 22:06:44.966189 1437835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0528 22:06:44.992710 1437835 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0528 22:06:44.992799 1437835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0528 22:06:45.049671 1437835 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
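
The two find passes above patch CNI configs in place: the first injects a "name" key into any loopback config and pins its cniVersion to 1.0.0, the second renames bridge/podman configs out of the way so they cannot conflict with minikube's own CNI choice. A minimal sketch of verifying the result on the node (the filename 200-loopback.conf is a hypothetical example; the injected fields come from the sed expressions above):

	# any /etc/cni/net.d/*loopback.conf* file qualifies; this name is assumed
	cat /etc/cni/net.d/200-loopback.conf
	# {
	#     "cniVersion": "1.0.0",
	#     "name": "loopback",
	#     "type": "loopback"
	# }
	ls /etc/cni/net.d/*.mk_disabled    # the bridge/podman configs disabled above
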
	I0528 22:06:45.049699 1437835 start.go:494] detecting cgroup driver to use...
	I0528 22:06:45.049740 1437835 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 22:06:45.049854 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:06:45.078416 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0528 22:06:45.113539 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0528 22:06:45.127947 1437835 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0528 22:06:45.128126 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0528 22:06:45.142709 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 22:06:45.155849 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0528 22:06:45.169631 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0528 22:06:45.184376 1437835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0528 22:06:45.196753 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0528 22:06:45.210313 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0528 22:06:45.225206 1437835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
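
The run of sed edits above rewrites /etc/containerd/config.toml in place rather than templating a fresh file: it pins the sandbox (pause) image, forces the runc v2 shim, points the CNI conf_dir at /etc/cni/net.d, disables the systemd cgroup integration to match the detected "cgroupfs" driver, and re-enables unprivileged ports. A quick way to confirm the outcome (a sketch; the grep only covers the keys edited above):

	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	#     sandbox_image = "registry.k8s.io/pause:3.9"
	#     SystemdCgroup = false
	#     conf_dir = "/etc/cni/net.d"
	#     enable_unprivileged_ports = true
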
	I0528 22:06:45.245878 1437835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0528 22:06:45.258268 1437835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0528 22:06:45.272562 1437835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:06:45.378943 1437835 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0528 22:06:45.487503 1437835 start.go:494] detecting cgroup driver to use...
	I0528 22:06:45.487617 1437835 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0528 22:06:45.487734 1437835 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0528 22:06:45.507040 1437835 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0528 22:06:45.507161 1437835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0528 22:06:45.526278 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0528 22:06:45.553940 1437835 ssh_runner.go:195] Run: which cri-dockerd
	I0528 22:06:45.558564 1437835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0528 22:06:45.570882 1437835 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0528 22:06:45.596171 1437835 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0528 22:06:45.719775 1437835 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0528 22:06:45.822509 1437835 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0528 22:06:45.822646 1437835 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
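
docker.go:574 applies the same "cgroupfs" decision to Docker itself by scp'ing a daemon.json from memory. The log only records its size (130 bytes), so the contents below are an assumption consistent with the driver named above, not a dump from this run:

	# hypothetical /etc/docker/daemon.json matching the "cgroupfs" setting
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker    # as the next log lines do
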
	I0528 22:06:45.847599 1437835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:06:45.943716 1437835 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0528 22:06:46.206657 1437835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0528 22:06:46.219595 1437835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 22:06:46.232938 1437835 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0528 22:06:46.331485 1437835 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0528 22:06:46.434274 1437835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:06:46.526442 1437835 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0528 22:06:46.552613 1437835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0528 22:06:46.564705 1437835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:06:46.657775 1437835 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0528 22:06:46.761092 1437835 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0528 22:06:46.761170 1437835 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0528 22:06:46.766603 1437835 start.go:562] Will wait 60s for crictl version
	I0528 22:06:46.766684 1437835 ssh_runner.go:195] Run: which crictl
	I0528 22:06:46.770631 1437835 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0528 22:06:46.810164 1437835 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.3
	RuntimeApiVersion:  v1
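
The Version/RuntimeName/RuntimeVersion/RuntimeApiVersion block above is the parsed output of `sudo /usr/bin/crictl version`, which resolves its endpoint from the /etc/crictl.yaml written earlier. To reproduce the query by hand with the endpoint made explicit (the flag overrides the config file):

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
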
	I0528 22:06:46.810247 1437835 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 22:06:46.831410 1437835 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0528 22:06:46.861623 1437835 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.3 ...
	I0528 22:06:46.861769 1437835 cli_runner.go:164] Run: docker network inspect no-preload-930485 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0528 22:06:46.877702 1437835 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0528 22:06:46.884132 1437835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
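
The bash one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal line, append the fresh mapping, and install the temp file with sudo cp (a plain `>` redirect would fail, since the redirect is opened by the unprivileged shell rather than by sudo). The same pattern, spelled out:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.76.1	host.minikube.internal"
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
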
	I0528 22:06:46.897129 1437835 kubeadm.go:877] updating cluster {Name:no-preload-930485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-930485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0528 22:06:46.897243 1437835 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 22:06:46.897302 1437835 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0528 22:06:46.912589 1437835 docker.go:685] Got preloaded images: 
	I0528 22:06:46.912611 1437835 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.1 wasn't preloaded
	I0528 22:06:46.912625 1437835 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/kube-controller-manager:v1.30.1 registry.k8s.io/kube-scheduler:v1.30.1 registry.k8s.io/kube-proxy:v1.30.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0528 22:06:46.915283 1437835 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:46.915518 1437835 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:46.915870 1437835 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:46.915991 1437835 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0528 22:06:46.916080 1437835 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:46.916181 1437835 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:06:46.916333 1437835 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:46.916522 1437835 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:46.917636 1437835 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:46.917897 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:46.918125 1437835 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0528 22:06:46.918188 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:46.918266 1437835 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:46.918371 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:46.918406 1437835 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:06:46.918457 1437835 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:47.148555 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0528 22:06:47.177137 1437835 cache_images.go:116] "registry.k8s.io/pause:3.9" needs transfer: "registry.k8s.io/pause:3.9" does not exist at hash "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e" in container runtime
	I0528 22:06:47.177260 1437835 docker.go:337] Removing image: registry.k8s.io/pause:3.9
	I0528 22:06:47.177349 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.9
	I0528 22:06:47.190472 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:47.196272 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:47.204956 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0528 22:06:47.205178 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:47.205248 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.9
	I0528 22:06:47.205460 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:47.205141 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:47.207849 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:47.225746 1437835 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.1" does not exist at hash "163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a" in container runtime
	I0528 22:06:47.225851 1437835 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:47.225929 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.30.1
	I0528 22:06:47.281730 1437835 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93" in container runtime
	I0528 22:06:47.281828 1437835 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:47.281918 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0528 22:06:47.284928 1437835 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.1" does not exist at hash "988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee" in container runtime
	I0528 22:06:47.285016 1437835 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:47.285096 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.30.1
	I0528 22:06:47.285187 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.9: stat -c "%s %y" /var/lib/minikube/images/pause_3.9: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.9': No such file or directory
	I0528 22:06:47.285223 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 --> /var/lib/minikube/images/pause_3.9 (268288 bytes)
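
The pause image above shows the transfer pattern that repeats for every cached image below: a `stat -c "%s %y"` probe checks whether the tarball already exists on the node, and only when stat exits non-zero does ssh_runner fall back to copying it over the SSH session. Roughly, as a shell sketch (paths and port from the log; the size/mtime comparison on a hit is an assumption about ssh_runner's behaviour):

	DST=/var/lib/minikube/images/pause_3.9
	if stat -c "%s %y" "$DST" >/dev/null 2>&1; then
	  :    # already present; size/mtime would decide whether to re-copy
	else
	  # in the log this copy rides the existing ssh session rather than scp(1)
	  scp -P 34297 "$HOME/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" "docker@127.0.0.1:$DST"
	fi
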
	I0528 22:06:47.285326 1437835 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd" in container runtime
	I0528 22:06:47.285363 1437835 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:47.285417 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.12-0
	I0528 22:06:47.285509 1437835 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.1" does not exist at hash "234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4" in container runtime
	I0528 22:06:47.285569 1437835 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:47.285731 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.30.1
	I0528 22:06:47.285651 1437835 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.1" needs transfer: "registry.k8s.io/kube-proxy:v1.30.1" does not exist at hash "05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee" in container runtime
	I0528 22:06:47.285840 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1
	I0528 22:06:47.285918 1437835 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:47.285959 1437835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.30.1
	I0528 22:06:47.286048 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0528 22:06:47.316878 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0528 22:06:47.316986 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0528 22:06:47.342387 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.30.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.30.1': No such file or directory
	I0528 22:06:47.342426 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 --> /var/lib/minikube/images/kube-scheduler_v1.30.1 (17646592 bytes)
	I0528 22:06:47.342522 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1
	I0528 22:06:47.342593 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0528 22:06:47.342650 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0528 22:06:47.342698 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0
	I0528 22:06:47.342784 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1
	I0528 22:06:47.342835 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1
	I0528 22:06:47.342889 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.1': No such file or directory
	I0528 22:06:47.342907 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 --> /var/lib/minikube/images/coredns_v1.11.1 (16488960 bytes)
	I0528 22:06:47.356702 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0528 22:06:47.356877 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0528 22:06:47.380329 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.12-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.12-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.12-0': No such file or directory
	I0528 22:06:47.380366 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 --> /var/lib/minikube/images/etcd_3.5.12-0 (66196992 bytes)
	I0528 22:06:47.380426 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.30.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.30.1': No such file or directory
	I0528 22:06:47.380443 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 --> /var/lib/minikube/images/kube-apiserver_v1.30.1 (29940224 bytes)
	I0528 22:06:47.380481 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.30.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.30.1': No such file or directory
	I0528 22:06:47.380499 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 --> /var/lib/minikube/images/kube-proxy_v1.30.1 (25628672 bytes)
	I0528 22:06:47.408419 1437835 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.9
	I0528 22:06:47.408456 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.9 | docker load"
	W0528 22:06:47.433952 1437835 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0528 22:06:47.434000 1437835 retry.go:31] will retry after 178.406611ms: ssh: rejected: connect failed (open failed)
	W0528 22:06:47.434764 1437835 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I0528 22:06:47.434783 1437835 retry.go:31] will retry after 214.510236ms: ssh: rejected: connect failed (open failed)
	W0528 22:06:47.447174 1437835 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0528 22:06:47.447443 1437835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:06:47.447520 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:47.466543 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:47.512651 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.30.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.30.1': No such file or directory
	I0528 22:06:47.512691 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 --> /var/lib/minikube/images/kube-controller-manager_v1.30.1 (28376576 bytes)
	I0528 22:06:47.512777 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:06:47.539789 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:06:47.617116 1437835 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0528 22:06:47.617216 1437835 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:06:47.617304 1437835 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:06:47.787627 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 from cache
	I0528 22:06:47.828513 1437835 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0528 22:06:47.828627 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0528 22:06:47.914868 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0528 22:06:47.914976 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0528 22:06:47.989302 1437835 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0528 22:06:47.989335 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.11.1 | docker load"
	I0528 22:06:47.039109 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:49.039182 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:51.042058 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:49.499923 1437835 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.11.1 | docker load": (1.510564431s)
	I0528 22:06:49.499998 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0528 22:06:49.500036 1437835 docker.go:304] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.1
	I0528 22:06:49.500065 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.30.1 | docker load"
	I0528 22:06:50.579090 1437835 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.30.1 | docker load": (1.079002687s)
	I0528 22:06:50.579156 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.1 from cache
	I0528 22:06:50.579189 1437835 docker.go:304] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.1
	I0528 22:06:50.579206 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.30.1 | docker load"
	I0528 22:06:51.558008 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.1 from cache
	I0528 22:06:51.558120 1437835 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0528 22:06:51.558146 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0528 22:06:51.821572 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0528 22:06:51.821658 1437835 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0528 22:06:51.821689 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.12-0 | docker load"
	I0528 22:06:53.770097 1437835 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.12-0 | docker load": (1.948377793s)
	I0528 22:06:53.770124 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0528 22:06:53.770143 1437835 docker.go:304] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.1
	I0528 22:06:53.770155 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.30.1 | docker load"
	I0528 22:06:53.539478 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:55.540506 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:55.033873 1437835 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.30.1 | docker load": (1.263692243s)
	I0528 22:06:55.033899 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.1 from cache
	I0528 22:06:55.033932 1437835 docker.go:304] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.1
	I0528 22:06:55.033942 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.30.1 | docker load"
	I0528 22:06:55.874750 1437835 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.1 from cache
	I0528 22:06:55.874787 1437835 cache_images.go:123] Successfully loaded all cached images
	I0528 22:06:55.874793 1437835 cache_images.go:92] duration metric: took 8.962154398s to LoadCachedImages
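
Each "Loading image" step above is the same pipeline: the tarball staged under /var/lib/minikube/images is streamed into the node's docker daemon with `docker load`. The whole LoadCachedImages pass (8.96s here) condenses to roughly this loop, with the file list taken from cache_images.go:88 above:

	for img in pause_3.9 coredns_v1.11.1 kube-scheduler_v1.30.1 \
	           kube-apiserver_v1.30.1 storage-provisioner_v5 \
	           etcd_3.5.12-0 kube-proxy_v1.30.1 kube-controller-manager_v1.30.1; do
	  sudo cat "/var/lib/minikube/images/$img" | docker load
	done
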
	I0528 22:06:55.874813 1437835 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.30.1 docker true true} ...
	I0528 22:06:55.874924 1437835 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-930485 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:no-preload-930485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0528 22:06:55.874999 1437835 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0528 22:06:55.927898 1437835 cni.go:84] Creating CNI manager for ""
	I0528 22:06:55.927921 1437835 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 22:06:55.927944 1437835 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0528 22:06:55.927964 1437835 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-930485 NodeName:no-preload-930485 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0528 22:06:55.928113 1437835 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-930485"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
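
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config generated like this can be sanity-checked without mutating node state, assuming the kubeadm binary installed next is on PATH:

	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
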
	
	I0528 22:06:55.928182 1437835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0528 22:06:55.937222 1437835 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0528 22:06:55.937339 1437835 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0528 22:06:55.946209 1437835 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubectl.sha256
	I0528 22:06:55.946300 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0528 22:06:55.947425 1437835 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/linux/arm64/v1.30.1/kubelet
	I0528 22:06:55.948203 1437835 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/linux/arm64/v1.30.1/kubeadm
	I0528 22:06:55.949968 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0528 22:06:55.949994 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/linux/arm64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (49938584 bytes)
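
binary.go:76 above fetches kubectl straight from dl.k8s.io; the `?checksum=file:...sha256` suffix is minikube's internal hint telling its downloader to verify against the published .sha256 file, not part of the URL the server sees. The equivalent manual verification:

	curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubectl
	curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/arm64/kubectl.sha256
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
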
	I0528 22:06:58.039455 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:00.096705 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:06:59.022628 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0528 22:06:59.027071 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0528 22:06:59.027114 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/linux/arm64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (48955544 bytes)
	I0528 22:07:01.761976 1437835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:07:01.774073 1437835 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0528 22:07:01.778007 1437835 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0528 22:07:01.778084 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/linux/arm64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (96446456 bytes)
	I0528 22:07:02.382157 1437835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0528 22:07:02.391737 1437835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0528 22:07:02.411804 1437835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0528 22:07:02.431098 1437835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0528 22:07:02.449656 1437835 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0528 22:07:02.453357 1437835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0528 22:07:02.464329 1437835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:07:02.545945 1437835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:07:02.565313 1437835 certs.go:68] Setting up /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485 for IP: 192.168.76.2
	I0528 22:07:02.565336 1437835 certs.go:194] generating shared ca certs ...
	I0528 22:07:02.565353 1437835 certs.go:226] acquiring lock for ca certs: {Name:mk5cb73d5e2c9c3b65010257baa77ed890ffd0a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:02.565487 1437835 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key
	I0528 22:07:02.565536 1437835 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key
	I0528 22:07:02.565548 1437835 certs.go:256] generating profile certs ...
	I0528 22:07:02.565606 1437835 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/client.key
	I0528 22:07:02.565625 1437835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/client.crt with IP's: []
	I0528 22:07:02.927618 1437835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/client.crt ...
	I0528 22:07:02.927652 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/client.crt: {Name:mk835f4f5578f7c855e54973873bd7d8eadc0eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:02.928463 1437835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/client.key ...
	I0528 22:07:02.928480 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/client.key: {Name:mka9e47148fb722177132f1d047e4d937c595715 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:02.928583 1437835 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.key.a831fb3c
	I0528 22:07:02.928600 1437835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.crt.a831fb3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0528 22:07:03.613146 1437835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.crt.a831fb3c ...
	I0528 22:07:03.613179 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.crt.a831fb3c: {Name:mk8e283d2920cc232bc7ec3799c8dcb8f4582770 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:03.613373 1437835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.key.a831fb3c ...
	I0528 22:07:03.613389 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.key.a831fb3c: {Name:mkf540d6f0887265a2968f99ca42f901768d0268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:03.613484 1437835 certs.go:381] copying /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.crt.a831fb3c -> /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.crt
	I0528 22:07:03.613569 1437835 certs.go:385] copying /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.key.a831fb3c -> /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.key
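
The apiserver certificate assembled above is signed for the service-network VIP (10.96.0.1), localhost, and the node IP, per the IP list at crypto.go:68. One way to confirm the SANs on the finished cert (a sketch; the profile path is abbreviated from the log, and -ext assumes OpenSSL 1.1.1+):

	openssl x509 -noout -ext subjectAltName \
	  -in ~/.minikube/profiles/no-preload-930485/apiserver.crt
	# expect: IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.76.2
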
	I0528 22:07:03.613642 1437835 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.key
	I0528 22:07:03.613661 1437835 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.crt with IP's: []
	I0528 22:07:03.991779 1437835 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.crt ...
	I0528 22:07:03.991856 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.crt: {Name:mk889f1b28a853cf21465ec59a4e2ff5a549e003 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:03.992079 1437835 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.key ...
	I0528 22:07:03.992122 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.key: {Name:mk7bc322c1679d9c785a7ad784e94f283b16b666 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:03.993006 1437835 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309.pem (1338 bytes)
	W0528 22:07:03.993085 1437835 certs.go:480] ignoring /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309_empty.pem, impossibly tiny 0 bytes
	I0528 22:07:03.993113 1437835 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca-key.pem (1675 bytes)
	I0528 22:07:03.993165 1437835 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/ca.pem (1078 bytes)
	I0528 22:07:03.993216 1437835 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/cert.pem (1123 bytes)
	I0528 22:07:03.993268 1437835 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/key.pem (1679 bytes)
	I0528 22:07:03.993336 1437835 certs.go:484] found cert: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem (1708 bytes)
	I0528 22:07:03.993970 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0528 22:07:04.022762 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0528 22:07:04.055920 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0528 22:07:04.082349 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0528 22:07:04.108318 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0528 22:07:04.133491 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0528 22:07:04.158181 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0528 22:07:04.183095 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/no-preload-930485/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0528 22:07:04.207341 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/certs/1070309.pem --> /usr/share/ca-certificates/1070309.pem (1338 bytes)
	I0528 22:07:04.236433 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/ssl/certs/10703092.pem --> /usr/share/ca-certificates/10703092.pem (1708 bytes)
	I0528 22:07:04.261610 1437835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0528 22:07:04.290698 1437835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0528 22:07:04.310573 1437835 ssh_runner.go:195] Run: openssl version
	I0528 22:07:04.317874 1437835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1070309.pem && ln -fs /usr/share/ca-certificates/1070309.pem /etc/ssl/certs/1070309.pem"
	I0528 22:07:04.328039 1437835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1070309.pem
	I0528 22:07:04.331683 1437835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 28 21:04 /usr/share/ca-certificates/1070309.pem
	I0528 22:07:04.331748 1437835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1070309.pem
	I0528 22:07:04.338950 1437835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1070309.pem /etc/ssl/certs/51391683.0"
	I0528 22:07:04.348413 1437835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10703092.pem && ln -fs /usr/share/ca-certificates/10703092.pem /etc/ssl/certs/10703092.pem"
	I0528 22:07:04.363197 1437835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10703092.pem
	I0528 22:07:04.367630 1437835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 28 21:04 /usr/share/ca-certificates/10703092.pem
	I0528 22:07:04.367699 1437835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10703092.pem
	I0528 22:07:04.374793 1437835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/10703092.pem /etc/ssl/certs/3ec20f2e.0"
	I0528 22:07:04.383921 1437835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0528 22:07:04.393594 1437835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:07:04.397381 1437835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 28 20:57 /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:07:04.397490 1437835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0528 22:07:04.404805 1437835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
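
The three test -L / ln -fs pairs above build OpenSSL's hashed-directory layout: each CA in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, so verification can locate it without scanning every file. The link names in the log (51391683.0, 3ec20f2e.0, b5213941.0) come straight from the `openssl x509 -hash` calls, e.g.:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
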
	I0528 22:07:04.414626 1437835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0528 22:07:04.417951 1437835 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0528 22:07:04.418007 1437835 kubeadm.go:391] StartCluster: {Name:no-preload-930485 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-930485 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 22:07:04.418211 1437835 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0528 22:07:04.444255 1437835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0528 22:07:04.453699 1437835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0528 22:07:04.462785 1437835 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0528 22:07:04.462857 1437835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0528 22:07:04.473638 1437835 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0528 22:07:04.473706 1437835 kubeadm.go:156] found existing configuration files:
	
	I0528 22:07:04.473767 1437835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0528 22:07:04.484131 1437835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0528 22:07:04.484202 1437835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0528 22:07:04.494917 1437835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0528 22:07:04.504922 1437835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0528 22:07:04.504998 1437835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0528 22:07:04.514780 1437835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0528 22:07:04.523959 1437835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0528 22:07:04.524079 1437835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0528 22:07:04.533056 1437835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0528 22:07:04.545627 1437835 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0528 22:07:04.545714 1437835 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0528 22:07:04.554597 1437835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0528 22:07:04.607866 1437835 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0528 22:07:04.608161 1437835 kubeadm.go:309] [preflight] Running pre-flight checks
	I0528 22:07:04.663818 1437835 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0528 22:07:04.663970 1437835 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1062-aws
	I0528 22:07:04.664035 1437835 kubeadm.go:309] OS: Linux
	I0528 22:07:04.664116 1437835 kubeadm.go:309] CGROUPS_CPU: enabled
	I0528 22:07:04.664195 1437835 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0528 22:07:04.664264 1437835 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0528 22:07:04.664338 1437835 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0528 22:07:04.664408 1437835 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0528 22:07:04.664502 1437835 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0528 22:07:04.664588 1437835 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0528 22:07:04.664654 1437835 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0528 22:07:04.664729 1437835 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0528 22:07:04.732349 1437835 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0528 22:07:04.732481 1437835 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0528 22:07:04.732617 1437835 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0528 22:07:05.016834 1437835 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0528 22:07:02.540370 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:04.540449 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:05.020161 1437835 out.go:204]   - Generating certificates and keys ...
	I0528 22:07:05.020375 1437835 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0528 22:07:05.020486 1437835 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0528 22:07:05.541564 1437835 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0528 22:07:05.827333 1437835 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0528 22:07:06.058429 1437835 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0528 22:07:06.596319 1437835 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0528 22:07:07.113873 1437835 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0528 22:07:07.114265 1437835 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-930485] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0528 22:07:07.645743 1437835 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0528 22:07:07.646104 1437835 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-930485] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0528 22:07:07.976345 1437835 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0528 22:07:08.243815 1437835 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0528 22:07:08.735602 1437835 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0528 22:07:08.735965 1437835 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0528 22:07:09.413625 1437835 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0528 22:07:09.986316 1437835 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0528 22:07:10.282100 1437835 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0528 22:07:10.549492 1437835 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0528 22:07:10.849990 1437835 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0528 22:07:10.850709 1437835 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0528 22:07:10.855130 1437835 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0528 22:07:07.041287 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:09.045385 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:11.539903 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:10.857384 1437835 out.go:204]   - Booting up control plane ...
	I0528 22:07:10.857483 1437835 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0528 22:07:10.857559 1437835 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0528 22:07:10.858154 1437835 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0528 22:07:10.870389 1437835 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0528 22:07:10.871277 1437835 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0528 22:07:10.871518 1437835 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0528 22:07:10.981897 1437835 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0528 22:07:10.981998 1437835 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0528 22:07:11.983127 1437835 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001515101s
	I0528 22:07:11.983225 1437835 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0528 22:07:14.043307 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:16.540166 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:19.484388 1437835 kubeadm.go:309] [api-check] The API server is healthy after 7.501301304s
	I0528 22:07:19.505012 1437835 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0528 22:07:19.523782 1437835 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0528 22:07:19.556148 1437835 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0528 22:07:19.556577 1437835 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-930485 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0528 22:07:19.574336 1437835 kubeadm.go:309] [bootstrap-token] Using token: iz9azi.ye78vz6r7u6u6z6l
	I0528 22:07:19.576525 1437835 out.go:204]   - Configuring RBAC rules ...
	I0528 22:07:19.576645 1437835 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0528 22:07:19.582185 1437835 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0528 22:07:19.592975 1437835 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0528 22:07:19.596759 1437835 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0528 22:07:19.601352 1437835 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0528 22:07:19.605952 1437835 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0528 22:07:19.891481 1437835 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0528 22:07:20.325248 1437835 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0528 22:07:20.892056 1437835 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0528 22:07:20.893874 1437835 kubeadm.go:309] 
	I0528 22:07:20.893950 1437835 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0528 22:07:20.893960 1437835 kubeadm.go:309] 
	I0528 22:07:20.894068 1437835 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0528 22:07:20.894095 1437835 kubeadm.go:309] 
	I0528 22:07:20.894131 1437835 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0528 22:07:20.894190 1437835 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0528 22:07:20.894239 1437835 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0528 22:07:20.894244 1437835 kubeadm.go:309] 
	I0528 22:07:20.894296 1437835 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0528 22:07:20.894300 1437835 kubeadm.go:309] 
	I0528 22:07:20.894346 1437835 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0528 22:07:20.894351 1437835 kubeadm.go:309] 
	I0528 22:07:20.894401 1437835 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0528 22:07:20.894473 1437835 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0528 22:07:20.894538 1437835 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0528 22:07:20.894542 1437835 kubeadm.go:309] 
	I0528 22:07:20.894624 1437835 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0528 22:07:20.894698 1437835 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0528 22:07:20.894702 1437835 kubeadm.go:309] 
	I0528 22:07:20.894783 1437835 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token iz9azi.ye78vz6r7u6u6z6l \
	I0528 22:07:20.894896 1437835 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c97b1399726f8cdd7302e82f74f094a89f23c332ff3aba8bc1ca69a66ac31365 \
	I0528 22:07:20.894918 1437835 kubeadm.go:309] 	--control-plane 
	I0528 22:07:20.894922 1437835 kubeadm.go:309] 
	I0528 22:07:20.895003 1437835 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0528 22:07:20.895013 1437835 kubeadm.go:309] 
	I0528 22:07:20.895092 1437835 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token iz9azi.ye78vz6r7u6u6z6l \
	I0528 22:07:20.895191 1437835 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:c97b1399726f8cdd7302e82f74f094a89f23c332ff3aba8bc1ca69a66ac31365 
	I0528 22:07:20.899480 1437835 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1062-aws\n", err: exit status 1
	I0528 22:07:20.899598 1437835 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
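
The /var/tmp/minikube/kubeadm.yaml consumed by the init run above is not reproduced in the log. A minimal sketch of its likely shape, assuming the kubeadm v1beta3 config API and using only values visible in the output (Kubernetes version, node name, advertise address, control-plane endpoint, certificate and etcd directories); everything else (pod subnet, feature gates, kubelet flags) is omitted because the log does not show it:

	cat <<'EOF' > /var/tmp/minikube/kubeadm.yaml
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2   # node IP, per the etcd cert SANs above
	  bindPort: 8443
	nodeRegistration:
	  name: no-preload-930485
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: v1.30.1
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	certificatesDir: /var/lib/minikube/certs
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	EOF
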
	I0528 22:07:20.899618 1437835 cni.go:84] Creating CNI manager for ""
	I0528 22:07:20.899634 1437835 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 22:07:20.903245 1437835 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0528 22:07:19.040005 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:21.041417 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:20.905018 1437835 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0528 22:07:20.927277 1437835 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
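
The 496-byte conflist scp'd above is not shown in the log. A plausible sketch of a minimal bridge CNI configuration of the kind being written, assuming the standard bridge, host-local IPAM, and portmap plugins; the 10.244.0.0/16 pod subnet is a placeholder, not taken from this run:

	cat <<'EOF' > /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
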
	I0528 22:07:20.949488 1437835 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0528 22:07:20.949632 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:20.949719 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-930485 minikube.k8s.io/updated_at=2024_05_28T22_07_20_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82 minikube.k8s.io/name=no-preload-930485 minikube.k8s.io/primary=true
	I0528 22:07:20.959670 1437835 ops.go:34] apiserver oom_adj: -16
	I0528 22:07:21.095235 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:21.595849 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:22.095344 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:22.595886 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:23.095345 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:23.595357 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:23.539265 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:25.539509 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:24.096284 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:24.595528 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:25.096133 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:25.595965 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:26.095845 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:26.596248 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:27.095834 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:27.595393 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:28.095551 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:28.595287 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:27.552261 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:30.040734 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:29.095550 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:29.595849 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:30.095405 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:30.596330 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:31.095938 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:31.596309 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:32.096059 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:32.596048 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:33.096169 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:33.595888 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:34.095828 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:34.595882 1437835 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0528 22:07:34.707878 1437835 kubeadm.go:1107] duration metric: took 13.758296149s to wait for elevateKubeSystemPrivileges
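
The burst of identical `kubectl get sa default` calls above is a readiness poll: kubeadm init has returned, but the `default` ServiceAccount only exists once the controller-manager's service-account controller has run, so minikube retries on a roughly 500ms cadence until the lookup succeeds (13.76s here). The same wait as a standalone sketch, with a hypothetical 120s timeout added:

	# Poll until the default ServiceAccount exists (controller-manager is ready).
	deadline=$((SECONDS + 120))
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for default SA" >&2; exit 1; }
	  sleep 0.5
	done
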
	W0528 22:07:34.707917 1437835 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0528 22:07:34.707925 1437835 kubeadm.go:393] duration metric: took 30.28992675s to StartCluster
	I0528 22:07:34.707941 1437835 settings.go:142] acquiring lock: {Name:mk9dd4e0f1e49f25e638e0ae0a582e344ec1255d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:34.708005 1437835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 22:07:34.709131 1437835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18966-1064873/kubeconfig: {Name:mk43b4b38c110ff2ffbd3a6de61be9ad6b977a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0528 22:07:34.709335 1437835 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0528 22:07:34.711789 1437835 out.go:177] * Verifying Kubernetes components...
	I0528 22:07:34.709443 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0528 22:07:34.709600 1437835 config.go:182] Loaded profile config "no-preload-930485": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 22:07:34.709609 1437835 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0528 22:07:34.713648 1437835 addons.go:69] Setting storage-provisioner=true in profile "no-preload-930485"
	I0528 22:07:34.713675 1437835 addons.go:234] Setting addon storage-provisioner=true in "no-preload-930485"
	I0528 22:07:34.713712 1437835 host.go:66] Checking if "no-preload-930485" exists ...
	I0528 22:07:34.714229 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Status}}
	I0528 22:07:34.714458 1437835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0528 22:07:34.714528 1437835 addons.go:69] Setting default-storageclass=true in profile "no-preload-930485"
	I0528 22:07:34.714548 1437835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-930485"
	I0528 22:07:34.714761 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Status}}
	I0528 22:07:34.744617 1437835 addons.go:234] Setting addon default-storageclass=true in "no-preload-930485"
	I0528 22:07:34.744656 1437835 host.go:66] Checking if "no-preload-930485" exists ...
	I0528 22:07:34.745056 1437835 cli_runner.go:164] Run: docker container inspect no-preload-930485 --format={{.State.Status}}
	I0528 22:07:34.761414 1437835 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0528 22:07:34.763811 1437835 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:07:34.763834 1437835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0528 22:07:34.763902 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:07:34.780823 1437835 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0528 22:07:34.780844 1437835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0528 22:07:34.780992 1437835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-930485
	I0528 22:07:34.799435 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:07:34.811967 1437835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34297 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/no-preload-930485/id_rsa Username:docker}
	I0528 22:07:34.980815 1437835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0528 22:07:35.032183 1437835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0528 22:07:35.063292 1437835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0528 22:07:35.063499 1437835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0528 22:07:36.054039 1437835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.07317049s)
	I0528 22:07:36.054147 1437835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.021943453s)
	I0528 22:07:36.054530 1437835 start.go:946] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
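
The sed pipeline run at 22:07:35 edits the CoreDNS Corefile in place rather than through an API helper: reconstructed from the sed expression itself, it inserts a `hosts` stanza ahead of the `forward . /etc/resolv.conf` line (and a `log` directive after `errors`). A quick way to confirm the result:

	# Print the live Corefile and check for the injected stanza:
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# Expected to contain (reconstructed from the sed expression above):
	#	hosts {
	#	   192.168.76.1 host.minikube.internal
	#	   fallthrough
	#	}
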
	I0528 22:07:36.057089 1437835 node_ready.go:35] waiting up to 6m0s for node "no-preload-930485" to be "Ready" ...
	I0528 22:07:36.097927 1437835 node_ready.go:49] node "no-preload-930485" has status "Ready":"True"
	I0528 22:07:36.098695 1437835 node_ready.go:38] duration metric: took 41.5256ms for node "no-preload-930485" to be "Ready" ...
	I0528 22:07:36.098763 1437835 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:07:36.117161 1437835 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0528 22:07:32.540231 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:35.041370 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:36.120804 1437835 addons.go:510] duration metric: took 1.411184883s for enable addons: enabled=[storage-provisioner default-storageclass]
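
The storageclass.yaml applied during addon enablement (271 bytes) is likewise not inlined in the log. A sketch of the default-StorageClass manifest it plausibly carries; the provisioner name is an assumption about minikube's hostpath provisioner, not read from this run:

	cat <<'EOF' | kubectl apply -f -
	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath   # assumption: minikube's hostpath provisioner
	EOF
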
	I0528 22:07:36.123203 1437835 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:36.566388 1437835 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-930485" context rescaled to 1 replicas
	I0528 22:07:38.129279 1437835 pod_ready.go:102] pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:37.043908 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:39.540662 1429762 pod_ready.go:102] pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:41.539183 1429762 pod_ready.go:81] duration metric: took 4m0.006054794s for pod "metrics-server-9975d5f86-5vgg7" in "kube-system" namespace to be "Ready" ...
	E0528 22:07:41.539211 1429762 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0528 22:07:41.539221 1429762 pod_ready.go:38] duration metric: took 5m25.879281664s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:07:41.539241 1429762 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:07:41.539325 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0528 22:07:41.581396 1429762 logs.go:276] 2 containers: [6f5dbe5b1578 2ca82d3e185e]
	I0528 22:07:41.581485 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0528 22:07:41.612667 1429762 logs.go:276] 2 containers: [ddf1864687ea b3da6daaeceb]
	I0528 22:07:41.612747 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0528 22:07:41.637760 1429762 logs.go:276] 2 containers: [11f08e40d7a4 adc72a271675]
	I0528 22:07:41.637844 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0528 22:07:41.654814 1429762 logs.go:276] 2 containers: [f10e2010fb5d 08f05bfb7e4a]
	I0528 22:07:41.654896 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0528 22:07:41.672603 1429762 logs.go:276] 2 containers: [7072b62ac073 2827d087d0f0]
	I0528 22:07:41.672693 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0528 22:07:41.690423 1429762 logs.go:276] 2 containers: [7f2f40603d14 81b8f1bbcbaa]
	I0528 22:07:41.690517 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0528 22:07:41.706113 1429762 logs.go:276] 0 containers: []
	W0528 22:07:41.706136 1429762 logs.go:278] No container was found matching "kindnet"
	I0528 22:07:41.706193 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0528 22:07:41.725404 1429762 logs.go:276] 1 containers: [34ca6395814f]
	I0528 22:07:41.725530 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0528 22:07:41.751424 1429762 logs.go:276] 2 containers: [62900a727b01 ba1d077e39b4]
	I0528 22:07:41.751458 1429762 logs.go:123] Gathering logs for coredns [adc72a271675] ...
	I0528 22:07:41.751470 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adc72a271675"
	I0528 22:07:41.777601 1429762 logs.go:123] Gathering logs for kube-proxy [2827d087d0f0] ...
	I0528 22:07:41.777630 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2827d087d0f0"
	I0528 22:07:41.801398 1429762 logs.go:123] Gathering logs for container status ...
	I0528 22:07:41.801426 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:07:40.130493 1437835 pod_ready.go:102] pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:42.132312 1437835 pod_ready.go:102] pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:41.867668 1429762 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:07:41.867701 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:07:42.032190 1429762 logs.go:123] Gathering logs for kube-apiserver [2ca82d3e185e] ...
	I0528 22:07:42.032220 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ca82d3e185e"
	I0528 22:07:42.101067 1429762 logs.go:123] Gathering logs for etcd [ddf1864687ea] ...
	I0528 22:07:42.101110 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf1864687ea"
	I0528 22:07:42.138547 1429762 logs.go:123] Gathering logs for kube-proxy [7072b62ac073] ...
	I0528 22:07:42.138580 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7072b62ac073"
	I0528 22:07:42.165445 1429762 logs.go:123] Gathering logs for kube-controller-manager [81b8f1bbcbaa] ...
	I0528 22:07:42.165480 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b8f1bbcbaa"
	I0528 22:07:42.213800 1429762 logs.go:123] Gathering logs for kubernetes-dashboard [34ca6395814f] ...
	I0528 22:07:42.213839 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ca6395814f"
	I0528 22:07:42.241229 1429762 logs.go:123] Gathering logs for storage-provisioner [62900a727b01] ...
	I0528 22:07:42.241261 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62900a727b01"
	I0528 22:07:42.278401 1429762 logs.go:123] Gathering logs for kube-apiserver [6f5dbe5b1578] ...
	I0528 22:07:42.278434 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f5dbe5b1578"
	I0528 22:07:42.324852 1429762 logs.go:123] Gathering logs for etcd [b3da6daaeceb] ...
	I0528 22:07:42.324888 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3da6daaeceb"
	I0528 22:07:42.355992 1429762 logs.go:123] Gathering logs for kube-scheduler [08f05bfb7e4a] ...
	I0528 22:07:42.356026 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08f05bfb7e4a"
	I0528 22:07:42.396234 1429762 logs.go:123] Gathering logs for storage-provisioner [ba1d077e39b4] ...
	I0528 22:07:42.396265 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1d077e39b4"
	I0528 22:07:42.423594 1429762 logs.go:123] Gathering logs for Docker ...
	I0528 22:07:42.423620 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0528 22:07:42.452281 1429762 logs.go:123] Gathering logs for coredns [11f08e40d7a4] ...
	I0528 22:07:42.452313 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f08e40d7a4"
	I0528 22:07:42.473796 1429762 logs.go:123] Gathering logs for kube-scheduler [f10e2010fb5d] ...
	I0528 22:07:42.473829 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f10e2010fb5d"
	I0528 22:07:42.498605 1429762 logs.go:123] Gathering logs for kube-controller-manager [7f2f40603d14] ...
	I0528 22:07:42.498638 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2f40603d14"
	I0528 22:07:42.558280 1429762 logs.go:123] Gathering logs for kubelet ...
	I0528 22:07:42.558353 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
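
The gathering block above follows one pattern throughout: list candidate containers with `docker ps -a --filter=name=k8s_<component>`, tail the last 400 lines of each hit, and fall back to journalctl for the kubelet and Docker daemons. The same sweep as a standalone loop (a sketch; the component list is taken from the log):

	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'); do
	    echo "--- ${c} (${id}) ---"
	    docker logs --tail 400 "${id}"
	  done
	done

The "Found kubelet problem" warnings that follow are minikube's scan of that journalctl output for error-level lines.
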
	W0528 22:07:42.621988 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.566487    1208 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:42.622280 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.568083    1208 reflector.go:138] object-"kube-system"/"coredns-token-8brxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8brxc" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:42.628522 1429762 logs.go:138] Found kubelet problem: May 28 22:02:17 old-k8s-version-292036 kubelet[1208]: E0528 22:02:17.280439    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.629620 1429762 logs.go:138] Found kubelet problem: May 28 22:02:18 old-k8s-version-292036 kubelet[1208]: E0528 22:02:18.011832    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.630319 1429762 logs.go:138] Found kubelet problem: May 28 22:02:19 old-k8s-version-292036 kubelet[1208]: E0528 22:02:19.075777    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.633832 1429762 logs.go:138] Found kubelet problem: May 28 22:02:33 old-k8s-version-292036 kubelet[1208]: E0528 22:02:33.317495    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.638383 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.084652    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.638876 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.357029    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.639474 1429762 logs.go:138] Found kubelet problem: May 28 22:02:47 old-k8s-version-292036 kubelet[1208]: E0528 22:02:47.254773    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.639953 1429762 logs.go:138] Found kubelet problem: May 28 22:02:48 old-k8s-version-292036 kubelet[1208]: E0528 22:02:48.433142    1208 pod_workers.go:191] Error syncing pod a526d13b-5979-4e3b-9a89-a95a00b5e5ee ("storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"
	W0528 22:07:42.642392 1429762 logs.go:138] Found kubelet problem: May 28 22:02:50 old-k8s-version-292036 kubelet[1208]: E0528 22:02:50.715710    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.645243 1429762 logs.go:138] Found kubelet problem: May 28 22:03:00 old-k8s-version-292036 kubelet[1208]: E0528 22:03:00.347065    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.645612 1429762 logs.go:138] Found kubelet problem: May 28 22:03:05 old-k8s-version-292036 kubelet[1208]: E0528 22:03:05.254675    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.645810 1429762 logs.go:138] Found kubelet problem: May 28 22:03:13 old-k8s-version-292036 kubelet[1208]: E0528 22:03:13.255557    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.648276 1429762 logs.go:138] Found kubelet problem: May 28 22:03:20 old-k8s-version-292036 kubelet[1208]: E0528 22:03:20.709611    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.648485 1429762 logs.go:138] Found kubelet problem: May 28 22:03:28 old-k8s-version-292036 kubelet[1208]: E0528 22:03:28.256445    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.648695 1429762 logs.go:138] Found kubelet problem: May 28 22:03:34 old-k8s-version-292036 kubelet[1208]: E0528 22:03:34.263336    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.650902 1429762 logs.go:138] Found kubelet problem: May 28 22:03:42 old-k8s-version-292036 kubelet[1208]: E0528 22:03:42.283007    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.651125 1429762 logs.go:138] Found kubelet problem: May 28 22:03:47 old-k8s-version-292036 kubelet[1208]: E0528 22:03:47.255783    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.651318 1429762 logs.go:138] Found kubelet problem: May 28 22:03:53 old-k8s-version-292036 kubelet[1208]: E0528 22:03:53.254587    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.651527 1429762 logs.go:138] Found kubelet problem: May 28 22:03:58 old-k8s-version-292036 kubelet[1208]: E0528 22:03:58.254212    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.651720 1429762 logs.go:138] Found kubelet problem: May 28 22:04:04 old-k8s-version-292036 kubelet[1208]: E0528 22:04:04.254475    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654056 1429762 logs.go:138] Found kubelet problem: May 28 22:04:10 old-k8s-version-292036 kubelet[1208]: E0528 22:04:10.693145    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.654252 1429762 logs.go:138] Found kubelet problem: May 28 22:04:19 old-k8s-version-292036 kubelet[1208]: E0528 22:04:19.254270    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654456 1429762 logs.go:138] Found kubelet problem: May 28 22:04:24 old-k8s-version-292036 kubelet[1208]: E0528 22:04:24.254178    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654646 1429762 logs.go:138] Found kubelet problem: May 28 22:04:31 old-k8s-version-292036 kubelet[1208]: E0528 22:04:31.254447    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.654849 1429762 logs.go:138] Found kubelet problem: May 28 22:04:36 old-k8s-version-292036 kubelet[1208]: E0528 22:04:36.263663    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655056 1429762 logs.go:138] Found kubelet problem: May 28 22:04:45 old-k8s-version-292036 kubelet[1208]: E0528 22:04:45.254653    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655260 1429762 logs.go:138] Found kubelet problem: May 28 22:04:50 old-k8s-version-292036 kubelet[1208]: E0528 22:04:50.254303    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655466 1429762 logs.go:138] Found kubelet problem: May 28 22:04:58 old-k8s-version-292036 kubelet[1208]: E0528 22:04:58.254565    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.655683 1429762 logs.go:138] Found kubelet problem: May 28 22:05:01 old-k8s-version-292036 kubelet[1208]: E0528 22:05:01.255269    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658070 1429762 logs.go:138] Found kubelet problem: May 28 22:05:12 old-k8s-version-292036 kubelet[1208]: E0528 22:05:12.271324    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:42.658316 1429762 logs.go:138] Found kubelet problem: May 28 22:05:16 old-k8s-version-292036 kubelet[1208]: E0528 22:05:16.254290    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658517 1429762 logs.go:138] Found kubelet problem: May 28 22:05:25 old-k8s-version-292036 kubelet[1208]: E0528 22:05:25.260346    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658725 1429762 logs.go:138] Found kubelet problem: May 28 22:05:30 old-k8s-version-292036 kubelet[1208]: E0528 22:05:30.254264    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.658914 1429762 logs.go:138] Found kubelet problem: May 28 22:05:38 old-k8s-version-292036 kubelet[1208]: E0528 22:05:38.258801    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.661520 1429762 logs.go:138] Found kubelet problem: May 28 22:05:42 old-k8s-version-292036 kubelet[1208]: E0528 22:05:42.722092    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:42.661736 1429762 logs.go:138] Found kubelet problem: May 28 22:05:50 old-k8s-version-292036 kubelet[1208]: E0528 22:05:50.254810    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.661967 1429762 logs.go:138] Found kubelet problem: May 28 22:05:56 old-k8s-version-292036 kubelet[1208]: E0528 22:05:56.262695    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662179 1429762 logs.go:138] Found kubelet problem: May 28 22:06:01 old-k8s-version-292036 kubelet[1208]: E0528 22:06:01.254241    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662413 1429762 logs.go:138] Found kubelet problem: May 28 22:06:09 old-k8s-version-292036 kubelet[1208]: E0528 22:06:09.281534    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662627 1429762 logs.go:138] Found kubelet problem: May 28 22:06:15 old-k8s-version-292036 kubelet[1208]: E0528 22:06:15.254396    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.662862 1429762 logs.go:138] Found kubelet problem: May 28 22:06:21 old-k8s-version-292036 kubelet[1208]: E0528 22:06:21.257196    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663061 1429762 logs.go:138] Found kubelet problem: May 28 22:06:30 old-k8s-version-292036 kubelet[1208]: E0528 22:06:30.254217    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663285 1429762 logs.go:138] Found kubelet problem: May 28 22:06:36 old-k8s-version-292036 kubelet[1208]: E0528 22:06:36.254429    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663488 1429762 logs.go:138] Found kubelet problem: May 28 22:06:42 old-k8s-version-292036 kubelet[1208]: E0528 22:06:42.254602    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663709 1429762 logs.go:138] Found kubelet problem: May 28 22:06:50 old-k8s-version-292036 kubelet[1208]: E0528 22:06:50.262748    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.663907 1429762 logs.go:138] Found kubelet problem: May 28 22:06:55 old-k8s-version-292036 kubelet[1208]: E0528 22:06:55.258724    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664118 1429762 logs.go:138] Found kubelet problem: May 28 22:07:03 old-k8s-version-292036 kubelet[1208]: E0528 22:07:03.262974    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664352 1429762 logs.go:138] Found kubelet problem: May 28 22:07:10 old-k8s-version-292036 kubelet[1208]: E0528 22:07:10.255540    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664572 1429762 logs.go:138] Found kubelet problem: May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.664857 1429762 logs.go:138] Found kubelet problem: May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.665090 1429762 logs.go:138] Found kubelet problem: May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.665294 1429762 logs.go:138] Found kubelet problem: May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.665512 1429762 logs.go:138] Found kubelet problem: May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0528 22:07:42.665524 1429762 logs.go:123] Gathering logs for dmesg ...
	I0528 22:07:42.665542 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:07:42.688308 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:42.688466 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:07:42.688535 1429762 out.go:239] X Problems detected in kubelet:
	W0528 22:07:42.688581 1429762 out.go:239]   May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688622 1429762 out.go:239]   May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688675 1429762 out.go:239]   May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688733 1429762 out.go:239]   May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:42.688747 1429762 out.go:239]   May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	I0528 22:07:42.688754 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:42.688761 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 22:07:44.629645 1437835 pod_ready.go:102] pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace has status "Ready":"False"
	I0528 22:07:47.128752 1437835 pod_ready.go:97] pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:46 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.76.2 HostIPs:[{IP:192.168.76.2}] PodIP: PodIPs:[] StartTime:2024-05-28 22:07:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-28 22:07:36 +0000 UTC,FinishedAt:2024-05-28 22:07:46 +0000 UTC,ContainerID:docker://dfa38a913d1c10a283d5d594d5493b388b6df7293c3d6e31df4686c6587d987b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker://sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93 ContainerID:docker://dfa38a913d1c10a283d5d594d5493b388b6df7293c3d6e31df4686c6587d987b Started:0x4001c59190 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0528 22:07:47.128787 1437835 pod_ready.go:81] duration metric: took 11.005498571s for pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace to be "Ready" ...
	E0528 22:07:47.128826 1437835 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-kvx2g" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:46 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-28 22:07:35 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.76.2 HostIPs:[{IP:192.168.76.2}] PodIP: PodIPs:[] StartTime:2024-05-28 22:07:35 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-28 22:07:36 +0000 UTC,FinishedAt:2024-05-28 22:07:46 +0000 UTC,ContainerID:docker://dfa38a913d1c10a283d5d594d5493b388b6df7293c3d6e31df4686c6587d987b,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker://sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93 ContainerID:docker://dfa38a913d1c10a283d5d594d5493b388b6df7293c3d6e31df4686c6587d987b Started:0x4001c59190 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0528 22:07:47.128846 1437835 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-spxk7" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.133491 1437835 pod_ready.go:92] pod "coredns-7db6d8ff4d-spxk7" in "kube-system" namespace has status "Ready":"True"
	I0528 22:07:47.133513 1437835 pod_ready.go:81] duration metric: took 4.654377ms for pod "coredns-7db6d8ff4d-spxk7" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.133525 1437835 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.138138 1437835 pod_ready.go:92] pod "etcd-no-preload-930485" in "kube-system" namespace has status "Ready":"True"
	I0528 22:07:47.138162 1437835 pod_ready.go:81] duration metric: took 4.630394ms for pod "etcd-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.138173 1437835 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.143236 1437835 pod_ready.go:92] pod "kube-apiserver-no-preload-930485" in "kube-system" namespace has status "Ready":"True"
	I0528 22:07:47.143260 1437835 pod_ready.go:81] duration metric: took 5.079705ms for pod "kube-apiserver-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.143271 1437835 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.147842 1437835 pod_ready.go:92] pod "kube-controller-manager-no-preload-930485" in "kube-system" namespace has status "Ready":"True"
	I0528 22:07:47.147865 1437835 pod_ready.go:81] duration metric: took 4.586022ms for pod "kube-controller-manager-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.147878 1437835 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jwz64" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.527538 1437835 pod_ready.go:92] pod "kube-proxy-jwz64" in "kube-system" namespace has status "Ready":"True"
	I0528 22:07:47.527566 1437835 pod_ready.go:81] duration metric: took 379.679826ms for pod "kube-proxy-jwz64" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.527579 1437835 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.927934 1437835 pod_ready.go:92] pod "kube-scheduler-no-preload-930485" in "kube-system" namespace has status "Ready":"True"
	I0528 22:07:47.927958 1437835 pod_ready.go:81] duration metric: took 400.370898ms for pod "kube-scheduler-no-preload-930485" in "kube-system" namespace to be "Ready" ...
	I0528 22:07:47.927971 1437835 pod_ready.go:38] duration metric: took 11.829181803s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0528 22:07:47.927989 1437835 api_server.go:52] waiting for apiserver process to appear ...
	I0528 22:07:47.928078 1437835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:07:47.944425 1437835 api_server.go:72] duration metric: took 13.235062419s to wait for apiserver process to appear ...
	I0528 22:07:47.944452 1437835 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:07:47.944472 1437835 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0528 22:07:47.952528 1437835 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0528 22:07:47.953680 1437835 api_server.go:141] control plane version: v1.30.1
	I0528 22:07:47.953704 1437835 api_server.go:131] duration metric: took 9.244674ms to wait for apiserver health ...
	I0528 22:07:47.953714 1437835 system_pods.go:43] waiting for kube-system pods to appear ...
	I0528 22:07:48.131501 1437835 system_pods.go:59] 7 kube-system pods found
	I0528 22:07:48.131537 1437835 system_pods.go:61] "coredns-7db6d8ff4d-spxk7" [9d7c5ced-e489-4cfd-8464-8f3a56cc7452] Running
	I0528 22:07:48.131545 1437835 system_pods.go:61] "etcd-no-preload-930485" [48b324e5-40ff-49e1-ab57-a4553b24e902] Running
	I0528 22:07:48.131549 1437835 system_pods.go:61] "kube-apiserver-no-preload-930485" [b16a3dfb-2be1-4142-8ff0-4dee0891b1e3] Running
	I0528 22:07:48.131554 1437835 system_pods.go:61] "kube-controller-manager-no-preload-930485" [d38a1ebd-f3e9-4da5-919b-eb67d2e1861a] Running
	I0528 22:07:48.131558 1437835 system_pods.go:61] "kube-proxy-jwz64" [8e380388-5219-4ef9-b44f-411f91c07e60] Running
	I0528 22:07:48.131562 1437835 system_pods.go:61] "kube-scheduler-no-preload-930485" [584e39c9-98f6-4f13-a35e-87b164f70efe] Running
	I0528 22:07:48.131566 1437835 system_pods.go:61] "storage-provisioner" [9da40f5e-afe0-4fa1-9428-17589c55c4e1] Running
	I0528 22:07:48.131571 1437835 system_pods.go:74] duration metric: took 177.851534ms to wait for pod list to return data ...
	I0528 22:07:48.131579 1437835 default_sa.go:34] waiting for default service account to be created ...
	I0528 22:07:48.326708 1437835 default_sa.go:45] found service account: "default"
	I0528 22:07:48.326740 1437835 default_sa.go:55] duration metric: took 195.153332ms for default service account to be created ...
	I0528 22:07:48.326752 1437835 system_pods.go:116] waiting for k8s-apps to be running ...
	I0528 22:07:48.529701 1437835 system_pods.go:86] 7 kube-system pods found
	I0528 22:07:48.529737 1437835 system_pods.go:89] "coredns-7db6d8ff4d-spxk7" [9d7c5ced-e489-4cfd-8464-8f3a56cc7452] Running
	I0528 22:07:48.529744 1437835 system_pods.go:89] "etcd-no-preload-930485" [48b324e5-40ff-49e1-ab57-a4553b24e902] Running
	I0528 22:07:48.529749 1437835 system_pods.go:89] "kube-apiserver-no-preload-930485" [b16a3dfb-2be1-4142-8ff0-4dee0891b1e3] Running
	I0528 22:07:48.529754 1437835 system_pods.go:89] "kube-controller-manager-no-preload-930485" [d38a1ebd-f3e9-4da5-919b-eb67d2e1861a] Running
	I0528 22:07:48.529757 1437835 system_pods.go:89] "kube-proxy-jwz64" [8e380388-5219-4ef9-b44f-411f91c07e60] Running
	I0528 22:07:48.529762 1437835 system_pods.go:89] "kube-scheduler-no-preload-930485" [584e39c9-98f6-4f13-a35e-87b164f70efe] Running
	I0528 22:07:48.529766 1437835 system_pods.go:89] "storage-provisioner" [9da40f5e-afe0-4fa1-9428-17589c55c4e1] Running
	I0528 22:07:48.529773 1437835 system_pods.go:126] duration metric: took 203.015522ms to wait for k8s-apps to be running ...
	I0528 22:07:48.529786 1437835 system_svc.go:44] waiting for kubelet service to be running ....
	I0528 22:07:48.529848 1437835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 22:07:48.542172 1437835 system_svc.go:56] duration metric: took 12.375739ms WaitForService to wait for kubelet
	I0528 22:07:48.542201 1437835 kubeadm.go:576] duration metric: took 13.832841991s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0528 22:07:48.542221 1437835 node_conditions.go:102] verifying NodePressure condition ...
	I0528 22:07:48.727383 1437835 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0528 22:07:48.727417 1437835 node_conditions.go:123] node cpu capacity is 2
	I0528 22:07:48.727431 1437835 node_conditions.go:105] duration metric: took 185.204451ms to run NodePressure ...
	I0528 22:07:48.727463 1437835 start.go:240] waiting for startup goroutines ...
	I0528 22:07:48.727478 1437835 start.go:245] waiting for cluster config update ...
	I0528 22:07:48.727489 1437835 start.go:254] writing updated cluster config ...
	I0528 22:07:48.727787 1437835 ssh_runner.go:195] Run: rm -f paused
	I0528 22:07:48.793458 1437835 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0528 22:07:48.795888 1437835 out.go:177] * Done! kubectl is now configured to use "no-preload-930485" cluster and "default" namespace by default
	I0528 22:07:52.690625 1429762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 22:07:52.703193 1429762 api_server.go:72] duration metric: took 5m52.601850338s to wait for apiserver process to appear ...
	I0528 22:07:52.703221 1429762 api_server.go:88] waiting for apiserver healthz status ...
	I0528 22:07:52.703303 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0528 22:07:52.721095 1429762 logs.go:276] 2 containers: [6f5dbe5b1578 2ca82d3e185e]
	I0528 22:07:52.721183 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0528 22:07:52.737993 1429762 logs.go:276] 2 containers: [ddf1864687ea b3da6daaeceb]
	I0528 22:07:52.738130 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0528 22:07:52.754549 1429762 logs.go:276] 2 containers: [11f08e40d7a4 adc72a271675]
	I0528 22:07:52.754636 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0528 22:07:52.771900 1429762 logs.go:276] 2 containers: [f10e2010fb5d 08f05bfb7e4a]
	I0528 22:07:52.771982 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0528 22:07:52.788226 1429762 logs.go:276] 2 containers: [7072b62ac073 2827d087d0f0]
	I0528 22:07:52.788301 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0528 22:07:52.805149 1429762 logs.go:276] 2 containers: [7f2f40603d14 81b8f1bbcbaa]
	I0528 22:07:52.805243 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0528 22:07:52.821459 1429762 logs.go:276] 0 containers: []
	W0528 22:07:52.821486 1429762 logs.go:278] No container was found matching "kindnet"
	I0528 22:07:52.821554 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0528 22:07:52.838787 1429762 logs.go:276] 1 containers: [34ca6395814f]
	I0528 22:07:52.838864 1429762 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0528 22:07:52.867219 1429762 logs.go:276] 2 containers: [62900a727b01 ba1d077e39b4]
	I0528 22:07:52.867291 1429762 logs.go:123] Gathering logs for kube-proxy [2827d087d0f0] ...
	I0528 22:07:52.867319 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2827d087d0f0"
	I0528 22:07:52.889298 1429762 logs.go:123] Gathering logs for kube-controller-manager [81b8f1bbcbaa] ...
	I0528 22:07:52.889329 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81b8f1bbcbaa"
	I0528 22:07:52.936053 1429762 logs.go:123] Gathering logs for kube-scheduler [08f05bfb7e4a] ...
	I0528 22:07:52.936097 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08f05bfb7e4a"
	I0528 22:07:52.970757 1429762 logs.go:123] Gathering logs for kubelet ...
	I0528 22:07:52.970790 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0528 22:07:53.030415 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.566487    1208 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:53.030649 1429762 logs.go:138] Found kubelet problem: May 28 22:02:15 old-k8s-version-292036 kubelet[1208]: E0528 22:02:15.568083    1208 reflector.go:138] object-"kube-system"/"coredns-token-8brxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-8brxc" is forbidden: User "system:node:old-k8s-version-292036" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-292036' and this object
	W0528 22:07:53.036796 1429762 logs.go:138] Found kubelet problem: May 28 22:02:17 old-k8s-version-292036 kubelet[1208]: E0528 22:02:17.280439    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.037876 1429762 logs.go:138] Found kubelet problem: May 28 22:02:18 old-k8s-version-292036 kubelet[1208]: E0528 22:02:18.011832    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.038791 1429762 logs.go:138] Found kubelet problem: May 28 22:02:19 old-k8s-version-292036 kubelet[1208]: E0528 22:02:19.075777    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.041766 1429762 logs.go:138] Found kubelet problem: May 28 22:02:33 old-k8s-version-292036 kubelet[1208]: E0528 22:02:33.317495    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.046133 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.084652    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.046520 1429762 logs.go:138] Found kubelet problem: May 28 22:02:39 old-k8s-version-292036 kubelet[1208]: E0528 22:02:39.357029    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.047045 1429762 logs.go:138] Found kubelet problem: May 28 22:02:47 old-k8s-version-292036 kubelet[1208]: E0528 22:02:47.254773    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.047491 1429762 logs.go:138] Found kubelet problem: May 28 22:02:48 old-k8s-version-292036 kubelet[1208]: E0528 22:02:48.433142    1208 pod_workers.go:191] Error syncing pod a526d13b-5979-4e3b-9a89-a95a00b5e5ee ("storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a526d13b-5979-4e3b-9a89-a95a00b5e5ee)"
	W0528 22:07:53.049738 1429762 logs.go:138] Found kubelet problem: May 28 22:02:50 old-k8s-version-292036 kubelet[1208]: E0528 22:02:50.715710    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.052288 1429762 logs.go:138] Found kubelet problem: May 28 22:03:00 old-k8s-version-292036 kubelet[1208]: E0528 22:03:00.347065    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.052629 1429762 logs.go:138] Found kubelet problem: May 28 22:03:05 old-k8s-version-292036 kubelet[1208]: E0528 22:03:05.254675    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.052818 1429762 logs.go:138] Found kubelet problem: May 28 22:03:13 old-k8s-version-292036 kubelet[1208]: E0528 22:03:13.255557    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.055089 1429762 logs.go:138] Found kubelet problem: May 28 22:03:20 old-k8s-version-292036 kubelet[1208]: E0528 22:03:20.709611    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.055277 1429762 logs.go:138] Found kubelet problem: May 28 22:03:28 old-k8s-version-292036 kubelet[1208]: E0528 22:03:28.256445    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.055476 1429762 logs.go:138] Found kubelet problem: May 28 22:03:34 old-k8s-version-292036 kubelet[1208]: E0528 22:03:34.263336    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.057559 1429762 logs.go:138] Found kubelet problem: May 28 22:03:42 old-k8s-version-292036 kubelet[1208]: E0528 22:03:42.283007    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.057757 1429762 logs.go:138] Found kubelet problem: May 28 22:03:47 old-k8s-version-292036 kubelet[1208]: E0528 22:03:47.255783    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.057945 1429762 logs.go:138] Found kubelet problem: May 28 22:03:53 old-k8s-version-292036 kubelet[1208]: E0528 22:03:53.254587    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.058152 1429762 logs.go:138] Found kubelet problem: May 28 22:03:58 old-k8s-version-292036 kubelet[1208]: E0528 22:03:58.254212    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.058340 1429762 logs.go:138] Found kubelet problem: May 28 22:04:04 old-k8s-version-292036 kubelet[1208]: E0528 22:04:04.254475    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.060596 1429762 logs.go:138] Found kubelet problem: May 28 22:04:10 old-k8s-version-292036 kubelet[1208]: E0528 22:04:10.693145    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.060783 1429762 logs.go:138] Found kubelet problem: May 28 22:04:19 old-k8s-version-292036 kubelet[1208]: E0528 22:04:19.254270    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.060981 1429762 logs.go:138] Found kubelet problem: May 28 22:04:24 old-k8s-version-292036 kubelet[1208]: E0528 22:04:24.254178    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061168 1429762 logs.go:138] Found kubelet problem: May 28 22:04:31 old-k8s-version-292036 kubelet[1208]: E0528 22:04:31.254447    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061370 1429762 logs.go:138] Found kubelet problem: May 28 22:04:36 old-k8s-version-292036 kubelet[1208]: E0528 22:04:36.263663    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061556 1429762 logs.go:138] Found kubelet problem: May 28 22:04:45 old-k8s-version-292036 kubelet[1208]: E0528 22:04:45.254653    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061753 1429762 logs.go:138] Found kubelet problem: May 28 22:04:50 old-k8s-version-292036 kubelet[1208]: E0528 22:04:50.254303    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.061937 1429762 logs.go:138] Found kubelet problem: May 28 22:04:58 old-k8s-version-292036 kubelet[1208]: E0528 22:04:58.254565    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.062144 1429762 logs.go:138] Found kubelet problem: May 28 22:05:01 old-k8s-version-292036 kubelet[1208]: E0528 22:05:01.255269    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.064248 1429762 logs.go:138] Found kubelet problem: May 28 22:05:12 old-k8s-version-292036 kubelet[1208]: E0528 22:05:12.271324    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0528 22:07:53.064452 1429762 logs.go:138] Found kubelet problem: May 28 22:05:16 old-k8s-version-292036 kubelet[1208]: E0528 22:05:16.254290    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.064642 1429762 logs.go:138] Found kubelet problem: May 28 22:05:25 old-k8s-version-292036 kubelet[1208]: E0528 22:05:25.260346    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.064840 1429762 logs.go:138] Found kubelet problem: May 28 22:05:30 old-k8s-version-292036 kubelet[1208]: E0528 22:05:30.254264    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.065028 1429762 logs.go:138] Found kubelet problem: May 28 22:05:38 old-k8s-version-292036 kubelet[1208]: E0528 22:05:38.258801    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.067327 1429762 logs.go:138] Found kubelet problem: May 28 22:05:42 old-k8s-version-292036 kubelet[1208]: E0528 22:05:42.722092    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0528 22:07:53.067517 1429762 logs.go:138] Found kubelet problem: May 28 22:05:50 old-k8s-version-292036 kubelet[1208]: E0528 22:05:50.254810    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.067715 1429762 logs.go:138] Found kubelet problem: May 28 22:05:56 old-k8s-version-292036 kubelet[1208]: E0528 22:05:56.262695    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.067905 1429762 logs.go:138] Found kubelet problem: May 28 22:06:01 old-k8s-version-292036 kubelet[1208]: E0528 22:06:01.254241    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068110 1429762 logs.go:138] Found kubelet problem: May 28 22:06:09 old-k8s-version-292036 kubelet[1208]: E0528 22:06:09.281534    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068295 1429762 logs.go:138] Found kubelet problem: May 28 22:06:15 old-k8s-version-292036 kubelet[1208]: E0528 22:06:15.254396    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068493 1429762 logs.go:138] Found kubelet problem: May 28 22:06:21 old-k8s-version-292036 kubelet[1208]: E0528 22:06:21.257196    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068679 1429762 logs.go:138] Found kubelet problem: May 28 22:06:30 old-k8s-version-292036 kubelet[1208]: E0528 22:06:30.254217    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.068878 1429762 logs.go:138] Found kubelet problem: May 28 22:06:36 old-k8s-version-292036 kubelet[1208]: E0528 22:06:36.254429    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069068 1429762 logs.go:138] Found kubelet problem: May 28 22:06:42 old-k8s-version-292036 kubelet[1208]: E0528 22:06:42.254602    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069265 1429762 logs.go:138] Found kubelet problem: May 28 22:06:50 old-k8s-version-292036 kubelet[1208]: E0528 22:06:50.262748    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069452 1429762 logs.go:138] Found kubelet problem: May 28 22:06:55 old-k8s-version-292036 kubelet[1208]: E0528 22:06:55.258724    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069649 1429762 logs.go:138] Found kubelet problem: May 28 22:07:03 old-k8s-version-292036 kubelet[1208]: E0528 22:07:03.262974    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.069835 1429762 logs.go:138] Found kubelet problem: May 28 22:07:10 old-k8s-version-292036 kubelet[1208]: E0528 22:07:10.255540    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070040 1429762 logs.go:138] Found kubelet problem: May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070230 1429762 logs.go:138] Found kubelet problem: May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070429 1429762 logs.go:138] Found kubelet problem: May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070614 1429762 logs.go:138] Found kubelet problem: May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070813 1429762 logs.go:138] Found kubelet problem: May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.070998 1429762 logs.go:138] Found kubelet problem: May 28 22:07:49 old-k8s-version-292036 kubelet[1208]: E0528 22:07:49.262987    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:07:53.071008 1429762 logs.go:123] Gathering logs for kube-apiserver [6f5dbe5b1578] ...
	I0528 22:07:53.071023 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f5dbe5b1578"
	I0528 22:07:53.128865 1429762 logs.go:123] Gathering logs for etcd [ddf1864687ea] ...
	I0528 22:07:53.128895 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ddf1864687ea"
	I0528 22:07:53.154934 1429762 logs.go:123] Gathering logs for etcd [b3da6daaeceb] ...
	I0528 22:07:53.154967 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3da6daaeceb"
	I0528 22:07:53.178801 1429762 logs.go:123] Gathering logs for coredns [11f08e40d7a4] ...
	I0528 22:07:53.178830 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f08e40d7a4"
	I0528 22:07:53.199635 1429762 logs.go:123] Gathering logs for coredns [adc72a271675] ...
	I0528 22:07:53.199666 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adc72a271675"
	I0528 22:07:53.222277 1429762 logs.go:123] Gathering logs for kube-scheduler [f10e2010fb5d] ...
	I0528 22:07:53.222306 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f10e2010fb5d"
	I0528 22:07:53.248483 1429762 logs.go:123] Gathering logs for kube-controller-manager [7f2f40603d14] ...
	I0528 22:07:53.248512 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7f2f40603d14"
	I0528 22:07:53.304485 1429762 logs.go:123] Gathering logs for kubernetes-dashboard [34ca6395814f] ...
	I0528 22:07:53.304517 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34ca6395814f"
	I0528 22:07:53.328266 1429762 logs.go:123] Gathering logs for dmesg ...
	I0528 22:07:53.328297 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0528 22:07:53.349197 1429762 logs.go:123] Gathering logs for kube-apiserver [2ca82d3e185e] ...
	I0528 22:07:53.349227 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ca82d3e185e"
	I0528 22:07:53.434007 1429762 logs.go:123] Gathering logs for kube-proxy [7072b62ac073] ...
	I0528 22:07:53.434055 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7072b62ac073"
	I0528 22:07:53.459969 1429762 logs.go:123] Gathering logs for storage-provisioner [62900a727b01] ...
	I0528 22:07:53.460000 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62900a727b01"
	I0528 22:07:53.479881 1429762 logs.go:123] Gathering logs for storage-provisioner [ba1d077e39b4] ...
	I0528 22:07:53.479912 1429762 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ba1d077e39b4"
	I0528 22:07:53.501261 1429762 logs.go:123] Gathering logs for Docker ...
	I0528 22:07:53.501338 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0528 22:07:53.533031 1429762 logs.go:123] Gathering logs for describe nodes ...
	I0528 22:07:53.533065 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0528 22:07:53.679809 1429762 logs.go:123] Gathering logs for container status ...
	I0528 22:07:53.679841 1429762 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0528 22:07:53.733270 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:53.733296 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0528 22:07:53.733350 1429762 out.go:239] X Problems detected in kubelet:
	W0528 22:07:53.733365 1429762 out.go:239]   May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733374 1429762 out.go:239]   May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733387 1429762 out.go:239]   May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733396 1429762 out.go:239]   May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0528 22:07:53.733419 1429762 out.go:239]   May 28 22:07:49 old-k8s-version-292036 kubelet[1208]: E0528 22:07:49.262987    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0528 22:07:53.733426 1429762 out.go:304] Setting ErrFile to fd 2...
	I0528 22:07:53.733434 1429762 out.go:338] TERM=,COLORTERM=, which probably does not support color
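	Every kubelet problem listed above is an ImagePullBackOff on the metrics-server and dashboard-metrics-scraper pods. A sketch of how to inspect the failing pull directly, assuming the kubectl context matches the profile name old-k8s-version-292036 used in this run:
	    # Pod name and namespace are taken from the kubelet messages above.
	    kubectl --context old-k8s-version-292036 -n kube-system \
	      describe pod metrics-server-9975d5f86-5vgg7
	    # The Events section should show the same Back-off pulling image entries.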
	I0528 22:08:03.735508 1429762 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0528 22:08:03.745216 1429762 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0528 22:08:03.747464 1429762 out.go:177] 
	W0528 22:08:03.749118 1429762 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0528 22:08:03.749161 1429762 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0528 22:08:03.749190 1429762 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0528 22:08:03.749196 1429762 out.go:239] * 
	W0528 22:08:03.750161 1429762 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0528 22:08:03.752030 1429762 out.go:177] 
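	For reference, the two recovery steps suggested in the box above can be run as-is (both commands are quoted verbatim from the output; -p selects the profile from this run):
	    # Capture full logs to attach to a GitHub issue, before deleting anything.
	    minikube logs --file=logs.txt -p old-k8s-version-292036
	    # Then discard all profiles and cached state and retry.
	    minikube delete --all --purge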
	
	
	==> Docker <==
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:42 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:52 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:52 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:52 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:52 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:07:53 old-k8s-version-292036 dockerd[963]: 2024/05/28 22:07:53 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 28 22:08:02 old-k8s-version-292036 dockerd[963]: time="2024-05-28T22:08:02.277937226Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" spanID=35fe2e31f944d0de traceID=92e5dcb438b4fc698245d2893816001c
	May 28 22:08:02 old-k8s-version-292036 dockerd[963]: time="2024-05-28T22:08:02.278003555Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" spanID=35fe2e31f944d0de traceID=92e5dcb438b4fc698245d2893816001c
	May 28 22:08:02 old-k8s-version-292036 dockerd[963]: time="2024-05-28T22:08:02.280927575Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" spanID=35fe2e31f944d0de traceID=92e5dcb438b4fc698245d2893816001c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	62900a727b018       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       2                   7d19a0334e730       storage-provisioner
	34ca6395814f8       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   661d47d25c723       kubernetes-dashboard-cd95d586-7xnc2
	11f08e40d7a46       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   7fb6264f8af09       coredns-74ff55c5b-xpq7c
	72b9cad3b2b45       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   7d4af29538b36       busybox
	ba1d077e39b47       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       1                   7d19a0334e730       storage-provisioner
	7072b62ac073b       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   fc391e8bff9ca       kube-proxy-cnv4j
	ddf1864687ea8       05b738aa1bc63                                                                                         6 minutes ago       Running             etcd                      1                   ab0b0a37561c9       etcd-old-k8s-version-292036
	f10e2010fb5d3       e7605f88f17d6                                                                                         6 minutes ago       Running             kube-scheduler            1                   9ab1e0f236e6b       kube-scheduler-old-k8s-version-292036
	6f5dbe5b1578d       2c08bbbc02d3a                                                                                         6 minutes ago       Running             kube-apiserver            1                   432fec4782da1       kube-apiserver-old-k8s-version-292036
	7f2f40603d14f       1df8a2b116bd1                                                                                         6 minutes ago       Running             kube-controller-manager   1                   d46118142ebe9       kube-controller-manager-old-k8s-version-292036
	f6c6985099f5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   68e90ac343bbd       busybox
	adc72a2716759       db91994f4ee8f                                                                                         7 minutes ago       Exited              coredns                   0                   477e74b914845       coredns-74ff55c5b-xpq7c
	2827d087d0f08       25a5233254979                                                                                         8 minutes ago       Exited              kube-proxy                0                   7219a5ca6f801       kube-proxy-cnv4j
	08f05bfb7e4aa       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   ff602292f0367       kube-scheduler-old-k8s-version-292036
	b3da6daaeceb2       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   cea2bbed45e70       etcd-old-k8s-version-292036
	81b8f1bbcbaa4       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   0c65b39202290       kube-controller-manager-old-k8s-version-292036
	2ca82d3e185e4       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   2505987e854cb       kube-apiserver-old-k8s-version-292036
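	This table comes from the fallback command logged earlier (sudo crictl ps -a || sudo docker ps -a); the Exited/Running pairs with ATTEMPT 0 and 1 appear to reflect the SecondStart restart. It can be regenerated on the node with:
	    minikube ssh -p old-k8s-version-292036 -- sudo docker ps -a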
	
	
	==> coredns [11f08e40d7a4] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49091 - 20371 "HINFO IN 8367972525928677374.7541641708929202855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012388159s
	
	
	==> coredns [adc72a271675] <==
	I0528 22:00:36.224021       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-28 22:00:06.223402211 +0000 UTC m=+0.068315698) (total time: 30.000490443s):
	Trace[2019727887]: [30.000490443s] [30.000490443s] END
	I0528 22:00:36.224321       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-28 22:00:06.223997818 +0000 UTC m=+0.068911329) (total time: 30.000305691s):
	Trace[1427131847]: [30.000305691s] [30.000305691s] END
	E0528 22:00:36.224347       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0528 22:00:36.224422       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-28 22:00:06.224189633 +0000 UTC m=+0.069103112) (total time: 30.000221623s):
	Trace[911902081]: [30.000221623s] [30.000221623s] END
	E0528 22:00:36.224429       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0528 22:00:36.224332       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	[INFO] Reloading complete
	[INFO] 127.0.0.1:45920 - 2387 "HINFO IN 6528128709998955322.4268978496498314789. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021258979s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
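	The i/o timeouts above show the earlier coredns instance failing to reach the in-cluster API service VIP 10.96.0.1:443 while the control plane was restarting. A sketch for checking that the VIP currently has a backend (kubectl context name assumed to match the profile):
	    # The kubernetes Endpoints object should list the apiserver address (192.168.85.2:8443 here).
	    kubectl --context old-k8s-version-292036 -n default get endpoints kubernetes
	    # coredns pods carry the standard k8s-app=kube-dns label.
	    kubectl --context old-k8s-version-292036 -n kube-system get pods -l k8s-app=kube-dns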
	
	
	==> describe nodes <==
	Name:               old-k8s-version-292036
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-292036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c95b4fdda455689199e2a93674568b261e34dc82
	                    minikube.k8s.io/name=old-k8s-version-292036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_28T21_59_46_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 May 2024 21:59:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-292036
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 May 2024 22:07:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 May 2024 22:03:06 +0000   Tue, 28 May 2024 21:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 May 2024 22:03:06 +0000   Tue, 28 May 2024 21:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 May 2024 22:03:06 +0000   Tue, 28 May 2024 21:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 May 2024 22:03:06 +0000   Tue, 28 May 2024 22:00:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-292036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022436Ki
	  pods:               110
	System Info:
	  Machine ID:                 70a382fcf83d499aaa46d364f7e8a7a5
	  System UUID:                49bc0da8-2e17-414e-8237-2f5a9fa3108d
	  Boot ID:                    869fd7c8-60a7-4ae5-b10f-ba225f4e7da7
	  Kernel Version:             5.15.0-1062-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.1.3
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 coredns-74ff55c5b-xpq7c                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m1s
	  kube-system                 etcd-old-k8s-version-292036                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m13s
	  kube-system                 kube-apiserver-old-k8s-version-292036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-controller-manager-old-k8s-version-292036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 kube-proxy-cnv4j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 kube-scheduler-old-k8s-version-292036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m13s
	  kube-system                 metrics-server-9975d5f86-5vgg7                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-mx2fg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-7xnc2               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m33s (x6 over 8m33s)  kubelet     Node old-k8s-version-292036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m33s (x6 over 8m33s)  kubelet     Node old-k8s-version-292036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m33s (x5 over 8m33s)  kubelet     Node old-k8s-version-292036 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m14s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m14s                  kubelet     Node old-k8s-version-292036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s                  kubelet     Node old-k8s-version-292036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m14s                  kubelet     Node old-k8s-version-292036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m13s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m3s                   kubelet     Node old-k8s-version-292036 status is now: NodeReady
	  Normal  Starting                 7m59s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-292036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-292036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)    kubelet     Node old-k8s-version-292036 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m46s                  kube-proxy  Starting kube-proxy.
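	As a consistency check on the tables above: the per-pod CPU requests (100m + 100m + 250m + 200m + 100m + 100m) sum to the 850m shown under Allocated resources, and 850m of the node's 2000m capacity is 42.5%, reported as 42%; likewise 70Mi + 100Mi + 200Mi = 370Mi, about 4.7% of the 8022436Ki memory capacity, reported as 4%.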
	
	
	==> dmesg <==
	[  +0.004377] FS-Cache: Duplicate cookie detected
	[  +0.000827] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.001091] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=00000000267a30f6
	[  +0.001287] FS-Cache: O-key=[8] '0a73ed0000000000'
	[  +0.000775] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001020] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000bac89902
	[  +0.001082] FS-Cache: N-key=[8] '0a73ed0000000000'
	[  +3.205466] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001035] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=00000000122b7051
	[  +0.001060] FS-Cache: O-key=[8] '0973ed0000000000'
	[  +0.000770] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000d9affdc7
	[  +0.001053] FS-Cache: N-key=[8] '0973ed0000000000'
	[  +0.302270] FS-Cache: Duplicate cookie detected
	[  +0.000729] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=000000001a3d9142{9p.inode} n=00000000ad395820
	[  +0.001072] FS-Cache: O-key=[8] '0f73ed0000000000'
	[  +0.000797] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001001] FS-Cache: N-cookie d=000000001a3d9142{9p.inode} n=00000000dd9c9154
	[  +0.001103] FS-Cache: N-key=[8] '0f73ed0000000000'
	[May28 21:47] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.004879] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.004334] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	[  +0.146623] overlayfs: '/var/lib/docker/overlay2/l/WCSESTVH3U25P3IZ5LAJCY2BWZ' not a directory
	
	
	==> etcd [b3da6daaeceb] <==
	raft2024/05/28 21:59:33 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/05/28 21:59:33 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/05/28 21:59:33 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-05-28 21:59:33.541415 I | etcdserver: setting up the initial cluster version to 3.4
	2024-05-28 21:59:33.545113 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-05-28 21:59:33.545174 I | etcdserver/api: enabled capabilities for version 3.4
	2024-05-28 21:59:33.545205 I | etcdserver: published {Name:old-k8s-version-292036 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-05-28 21:59:33.545214 I | embed: ready to serve client requests
	2024-05-28 21:59:33.546805 I | embed: serving client requests on 127.0.0.1:2379
	2024-05-28 21:59:33.546997 I | embed: ready to serve client requests
	2024-05-28 21:59:33.557121 I | embed: serving client requests on 192.168.85.2:2379
	2024-05-28 22:00:01.193737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:00:10.202332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:00:20.202249 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:00:30.202459 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:00:40.202281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:00:50.202291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:01:00.204824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:01:10.202403 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:01:20.202651 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:01:30.203498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:01:40.205655 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:01:40.945065 N | pkg/osutil: received terminated signal, shutting down...
	2024-05-28 22:01:40.988341 I | etcdserver: skipped leadership transfer for single voting member cluster
	WARNING: 2024/05/28 22:01:40 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: operation was canceled". Reconnecting...
	
	
	==> etcd [ddf1864687ea] <==
	2024-05-28 22:03:59.480224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:04:09.480155 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:04:19.480376 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:04:29.480333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:04:39.480327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:04:49.480195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:04:59.480179 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:05:09.480377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:05:19.480106 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:05:29.480136 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:05:39.480382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:05:49.480370 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:05:59.480360 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:06:09.480280 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:06:19.480636 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:06:29.480239 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:06:39.480218 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:06:49.480406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:06:59.480302 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:07:09.480561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:07:19.480142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:07:29.480191 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:07:39.480237 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:07:49.480283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-28 22:07:59.480288 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 22:08:05 up  5:50,  0 users,  load average: 3.24, 2.68, 3.55
	Linux old-k8s-version-292036 5.15.0-1062-aws #68~20.04.1-Ubuntu SMP Tue May 7 11:50:35 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [2ca82d3e185e] <==
	W0528 22:01:41.027603       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0528 22:01:41.027661       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0528 22:01:41.027763       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0528 22:01:41.027934       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	I0528 22:01:41.028060       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	W0528 22:01:41.028178       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028211       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028249       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028285       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028320       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028354       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028396       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028433       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028469       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028506       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028546       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028584       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028622       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028660       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028744       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028784       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028824       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028866       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028910       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0528 22:01:41.028952       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-apiserver [6f5dbe5b1578] <==
	I0528 22:04:40.013869       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:04:40.013880       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0528 22:05:18.584033       1 handler_proxy.go:102] no RequestInfo found in the context
	E0528 22:05:18.584211       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:05:18.584227       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0528 22:05:24.157055       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:05:24.157120       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:05:24.157130       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0528 22:06:00.768601       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:06:00.768647       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:06:00.768687       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0528 22:06:43.953007       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:06:43.953070       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:06:43.953080       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0528 22:07:16.680972       1 handler_proxy.go:102] no RequestInfo found in the context
	E0528 22:07:16.681065       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0528 22:07:16.681087       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0528 22:07:18.784063       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:07:18.784290       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:07:18.784379       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0528 22:07:50.373383       1 client.go:360] parsed scheme: "passthrough"
	I0528 22:07:50.373429       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0528 22:07:50.373440       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
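	The recurring 503s above mean the aggregated v1beta1.metrics.k8s.io API never became available, consistent with its backing metrics-server pod being stuck in ImagePullBackOff. A sketch for confirming the aggregation status (same kubectl context assumption as earlier):
	    # Expect Available=False with a discovery-failure style reason.
	    kubectl --context old-k8s-version-292036 get apiservice v1beta1.metrics.k8s.io
	    # Probing the aggregated API directly should return the same 503.
	    kubectl --context old-k8s-version-292036 get --raw /apis/metrics.k8s.io/v1beta1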
	
	
	==> kube-controller-manager [7f2f40603d14] <==
	W0528 22:03:39.436202       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:04:04.854408       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:04:11.086650       1 request.go:655] Throttling request took 1.047419057s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:04:11.938419       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:04:35.360967       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:04:43.588861       1 request.go:655] Throttling request took 1.047924955s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:04:44.440193       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:05:05.863533       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:05:16.090574       1 request.go:655] Throttling request took 1.047923943s, request: GET:https://192.168.85.2:8443/apis/networking.k8s.io/v1?timeout=32s
	W0528 22:05:16.944203       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:05:36.366135       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:05:48.594752       1 request.go:655] Throttling request took 1.047115401s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:05:49.446424       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:06:06.923176       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:06:21.097012       1 request.go:655] Throttling request took 1.048503328s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:06:21.948424       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:06:37.425277       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:06:53.598988       1 request.go:655] Throttling request took 1.048097956s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:06:54.450563       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:07:07.933909       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:07:26.100936       1 request.go:655] Throttling request took 1.048276757s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:07:26.952609       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0528 22:07:38.435688       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0528 22:07:58.604190       1 request.go:655] Throttling request took 1.048280758s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0528 22:07:59.455548       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [81b8f1bbcbaa] <==
	I0528 22:00:03.261998       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0528 22:00:03.262305       1 shared_informer.go:247] Caches are synced for TTL 
	I0528 22:00:03.262318       1 shared_informer.go:247] Caches are synced for attach detach 
	I0528 22:00:03.385019       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0528 22:00:03.388113       1 range_allocator.go:373] Set node old-k8s-version-292036 PodCIDR to [10.244.0.0/24]
	I0528 22:00:03.437964       1 shared_informer.go:247] Caches are synced for job 
	I0528 22:00:03.455722       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0528 22:00:03.455756       1 shared_informer.go:247] Caches are synced for disruption 
	I0528 22:00:03.455762       1 disruption.go:339] Sending events to api server.
	I0528 22:00:03.471935       1 shared_informer.go:247] Caches are synced for resource quota 
	I0528 22:00:03.472000       1 shared_informer.go:247] Caches are synced for resource quota 
	I0528 22:00:03.472011       1 shared_informer.go:247] Caches are synced for deployment 
	E0528 22:00:03.472101       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0528 22:00:03.493739       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0528 22:00:03.497218       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cnv4j"
	I0528 22:00:03.561799       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-6pmc4"
	I0528 22:00:03.602393       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-xpq7c"
	I0528 22:00:03.649978       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0528 22:00:03.862179       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0528 22:00:03.929038       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0528 22:00:03.929068       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0528 22:00:07.049052       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0528 22:00:07.084090       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-6pmc4"
	I0528 22:01:39.596172       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0528 22:01:39.745614       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [2827d087d0f0] <==
	I0528 22:00:05.829509       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0528 22:00:05.829606       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0528 22:00:05.949491       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0528 22:00:05.949578       1 server_others.go:185] Using iptables Proxier.
	I0528 22:00:05.949777       1 server.go:650] Version: v1.20.0
	I0528 22:00:05.950278       1 config.go:315] Starting service config controller
	I0528 22:00:05.950287       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0528 22:00:05.962184       1 config.go:224] Starting endpoint slice config controller
	I0528 22:00:05.962207       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0528 22:00:06.052335       1 shared_informer.go:247] Caches are synced for service config 
	I0528 22:00:06.063037       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [7072b62ac073] <==
	I0528 22:02:18.040229       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0528 22:02:18.040317       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0528 22:02:18.080693       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0528 22:02:18.081057       1 server_others.go:185] Using iptables Proxier.
	I0528 22:02:18.081434       1 server.go:650] Version: v1.20.0
	I0528 22:02:18.082552       1 config.go:315] Starting service config controller
	I0528 22:02:18.082629       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0528 22:02:18.082690       1 config.go:224] Starting endpoint slice config controller
	I0528 22:02:18.082728       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0528 22:02:18.182790       1 shared_informer.go:247] Caches are synced for service config 
	I0528 22:02:18.182859       1 shared_informer.go:247] Caches are synced for endpoint slice config 
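	Both kube-proxy runs fell back to iptables mode after finding an empty proxy-mode setting, so service routing is driven by the KUBE-SERVICES chain in the nat table; a hypothetical spot check on the node:
	    minikube ssh -p old-k8s-version-292036 -- sudo iptables -t nat -L KUBE-SERVICES -n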
	
	
	==> kube-scheduler [08f05bfb7e4a] <==
	E0528 21:59:42.668697       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 21:59:42.671470       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0528 21:59:42.672337       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:59:42.672551       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 21:59:42.672777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 21:59:42.673272       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 21:59:42.673592       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 21:59:42.673816       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 21:59:43.578281       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0528 21:59:43.620844       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0528 21:59:43.698535       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0528 21:59:43.704224       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0528 21:59:43.778355       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0528 21:59:43.870573       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0528 21:59:43.870770       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0528 21:59:43.893010       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0528 21:59:43.912691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0528 21:59:43.916395       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0528 21:59:44.206375       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0528 21:59:44.268519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0528 21:59:46.245272       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0528 22:01:40.783230       1 framework.go:777] "Failed running Bind plugin" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-5vgg7/binding\": unexpected EOF" plugin="DefaultBinder" pod="kube-system/metrics-server-9975d5f86-5vgg7"
	E0528 22:01:40.783347       1 factory.go:337] "Error scheduling pod; retrying" err="binding rejected: running Bind plugin \"DefaultBinder\": Post \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-5vgg7/binding\": unexpected EOF" pod="kube-system/metrics-server-9975d5f86-5vgg7"
	E0528 22:01:40.806916       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server-9975d5f86-5vgg7.17d3c61aaa173096", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"metrics-server-9975d5f86-5vgg7", UID:"aa9ea843-e56b-405a-861d-c9501c664da7", APIVersion:"v1", ResourceVersion:"585", FieldPath:""}, Reason:"FailedScheduling", Message:"binding rejected: running Bind plugin \"DefaultBinder\": Post \"https://192.168.85.2:8443/api/v1/namespaces/
kube-system/pods/metrics-server-9975d5f86-5vgg7/binding\": unexpected EOF", Source:v1.EventSource{Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc18db2f12eb18896, ext:127718546896, loc:(*time.Location)(0x25fc580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc18db2f12eb18896, ext:127718546896, loc:(*time.Location)(0x25fc580)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.85.2:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.85.2:8443: connect: connection refused'(may retry after sleeping)
	E0528 22:01:40.814542       1 scheduler.go:338] Error updating pod kube-system/metrics-server-9975d5f86-5vgg7: Patch "https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/metrics-server-9975d5f86-5vgg7/status": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> kube-scheduler [f10e2010fb5d] <==
	I0528 22:02:11.122517       1 serving.go:331] Generated self-signed cert in-memory
	W0528 22:02:15.378225       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0528 22:02:15.378255       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0528 22:02:15.378287       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0528 22:02:15.378293       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0528 22:02:15.700521       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0528 22:02:15.708303       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 22:02:15.710152       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0528 22:02:15.710193       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0528 22:02:15.811256       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	May 28 22:05:42 old-k8s-version-292036 kubelet[1208]: E0528 22:05:42.722092    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	May 28 22:05:50 old-k8s-version-292036 kubelet[1208]: E0528 22:05:50.254810    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:05:56 old-k8s-version-292036 kubelet[1208]: E0528 22:05:56.262695    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:06:01 old-k8s-version-292036 kubelet[1208]: E0528 22:06:01.254241    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:06:09 old-k8s-version-292036 kubelet[1208]: E0528 22:06:09.281534    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:06:15 old-k8s-version-292036 kubelet[1208]: E0528 22:06:15.254396    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:06:21 old-k8s-version-292036 kubelet[1208]: E0528 22:06:21.257196    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:06:30 old-k8s-version-292036 kubelet[1208]: E0528 22:06:30.254217    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:06:36 old-k8s-version-292036 kubelet[1208]: E0528 22:06:36.254429    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:06:42 old-k8s-version-292036 kubelet[1208]: E0528 22:06:42.254602    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:06:50 old-k8s-version-292036 kubelet[1208]: E0528 22:06:50.262748    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:06:55 old-k8s-version-292036 kubelet[1208]: E0528 22:06:55.258724    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:07:03 old-k8s-version-292036 kubelet[1208]: E0528 22:07:03.262974    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:07:10 old-k8s-version-292036 kubelet[1208]: E0528 22:07:10.255540    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:07:16 old-k8s-version-292036 kubelet[1208]: E0528 22:07:16.254105    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:07:24 old-k8s-version-292036 kubelet[1208]: E0528 22:07:24.254562    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:07:27 old-k8s-version-292036 kubelet[1208]: E0528 22:07:27.254400    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:07:35 old-k8s-version-292036 kubelet[1208]: E0528 22:07:35.254891    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:07:38 old-k8s-version-292036 kubelet[1208]: E0528 22:07:38.254217    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:07:49 old-k8s-version-292036 kubelet[1208]: E0528 22:07:49.262987    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 28 22:07:53 old-k8s-version-292036 kubelet[1208]: E0528 22:07:53.265240    1208 pod_workers.go:191] Error syncing pod 4973395c-b7f1-44eb-84e8-5a2363659c72 ("dashboard-metrics-scraper-8d5bb5db8-mx2fg_kubernetes-dashboard(4973395c-b7f1-44eb-84e8-5a2363659c72)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 28 22:08:02 old-k8s-version-292036 kubelet[1208]: E0528 22:08:02.281451    1208 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	May 28 22:08:02 old-k8s-version-292036 kubelet[1208]: E0528 22:08:02.281503    1208 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	May 28 22:08:02 old-k8s-version-292036 kubelet[1208]: E0528 22:08:02.281647    1208 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-wnxgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exe
c:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-5vgg7_kube-system(aa9ea8
43-e56b-405a-861d-c9501c664da7): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	May 28 22:08:02 old-k8s-version-292036 kubelet[1208]: E0528 22:08:02.281680    1208 pod_workers.go:191] Error syncing pod aa9ea843-e56b-405a-861d-c9501c664da7 ("metrics-server-9975d5f86-5vgg7_kube-system(aa9ea843-e56b-405a-861d-c9501c664da7)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [34ca6395814f] <==
	2024/05/28 22:02:39 Starting overwatch
	2024/05/28 22:02:39 Using namespace: kubernetes-dashboard
	2024/05/28 22:02:39 Using in-cluster config to connect to apiserver
	2024/05/28 22:02:39 Using secret token for csrf signing
	2024/05/28 22:02:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/28 22:02:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/28 22:02:39 Successful initial request to the apiserver, version: v1.20.0
	2024/05/28 22:02:39 Generating JWE encryption key
	2024/05/28 22:02:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/28 22:02:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/28 22:02:40 Initializing JWE encryption key from synchronized object
	2024/05/28 22:02:40 Creating in-cluster Sidecar client
	2024/05/28 22:02:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:02:40 Serving insecurely on HTTP port: 9090
	2024/05/28 22:03:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:03:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:04:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:04:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:05:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:05:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:06:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:06:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:07:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/28 22:07:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [62900a727b01] <==
	I0528 22:03:01.392925       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0528 22:03:01.422491       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0528 22:03:01.422607       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0528 22:03:18.895380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0528 22:03:18.895853       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"096ab19d-7bc1-44ba-a792-ce71040e6d09", APIVersion:"v1", ResourceVersion:"799", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-292036_f6aa811b-f5bd-4d65-a222-e6d296aec8c0 became leader
	I0528 22:03:18.895934       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-292036_f6aa811b-f5bd-4d65-a222-e6d296aec8c0!
	I0528 22:03:18.996603       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-292036_f6aa811b-f5bd-4d65-a222-e6d296aec8c0!
	
	
	==> storage-provisioner [ba1d077e39b4] <==
	I0528 22:02:17.925679       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0528 22:02:47.928138       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-292036 -n old-k8s-version-292036
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-292036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-5vgg7 dashboard-metrics-scraper-8d5bb5db8-mx2fg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-292036 describe pod metrics-server-9975d5f86-5vgg7 dashboard-metrics-scraper-8d5bb5db8-mx2fg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-292036 describe pod metrics-server-9975d5f86-5vgg7 dashboard-metrics-scraper-8d5bb5db8-mx2fg: exit status 1 (101.210804ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-5vgg7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-mx2fg" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-292036 describe pod metrics-server-9975d5f86-5vgg7 dashboard-metrics-scraper-8d5bb5db8-mx2fg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.30s)
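Note on the repeated image-pull errors in the kubelet log above: both failure modes appear intentional for this suite rather than regressions. The metrics-server pod is pointed at the unreachable registry fake.domain (hence the "no such host" DNS errors), and registry.k8s.io/echoserver:1.4 is an old Docker Image manifest v2, schema 1 image, which current Docker daemons reject by default (hence the deprecation notice). A minimal sketch for reproducing both checks by hand, reusing the profile name from the logs; the commands assume a standard minikube and docker CLI on the test host and are illustrative only, not part of the test run:

	# DNS for the bogus registry should fail, matching the kubelet "no such host" error
	minikube -p old-k8s-version-292036 ssh "nslookup fake.domain"
	# Inspecting the echoserver manifest shows its media type; a schema 1 type would explain the deprecation notice
	docker manifest inspect --verbose registry.k8s.io/echoserver:1.4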


Test pass (316/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.08
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.1/json-events 7.41
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.07
18 TestDownloadOnly/v1.30.1/DeleteAll 0.2
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.57
22 TestOffline 100.81
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 227.49
29 TestAddons/parallel/Registry 17
31 TestAddons/parallel/InspektorGadget 11.85
32 TestAddons/parallel/MetricsServer 6.75
35 TestAddons/parallel/CSI 62.97
36 TestAddons/parallel/Headlamp 12
37 TestAddons/parallel/CloudSpanner 5.51
38 TestAddons/parallel/LocalPath 52.31
39 TestAddons/parallel/NvidiaDevicePlugin 5.48
40 TestAddons/parallel/Yakd 6
41 TestAddons/parallel/Volcano 35.26
44 TestAddons/serial/GCPAuth/Namespaces 0.16
45 TestAddons/StoppedEnableDisable 11.13
46 TestCertOptions 40.99
47 TestCertExpiration 250.74
48 TestDockerFlags 45.91
49 TestForceSystemdFlag 48.9
50 TestForceSystemdEnv 39.21
56 TestErrorSpam/setup 30.6
57 TestErrorSpam/start 0.7
58 TestErrorSpam/status 0.98
59 TestErrorSpam/pause 1.28
60 TestErrorSpam/unpause 1.39
61 TestErrorSpam/stop 10.99
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 85.31
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 33.44
68 TestFunctional/serial/KubeContext 0.08
69 TestFunctional/serial/KubectlGetPods 0.17
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.84
73 TestFunctional/serial/CacheCmd/cache/add_local 0.96
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 119.04
83 TestFunctional/serial/LogsCmd 1.26
84 TestFunctional/serial/LogsFileCmd 1.15
85 TestFunctional/serial/InvalidService 4.92
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 11.77
89 TestFunctional/parallel/DryRun 0.48
90 TestFunctional/parallel/InternationalLanguage 0.26
91 TestFunctional/parallel/StatusCmd 1.2
95 TestFunctional/parallel/ServiceCmdConnect 11.66
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 26.64
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.28
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.06
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.32
111 TestFunctional/parallel/License 0.28
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.26
124 TestFunctional/parallel/ServiceCmd/List 0.55
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
127 TestFunctional/parallel/ProfileCmd/profile_list 0.46
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
130 TestFunctional/parallel/ServiceCmd/Format 0.49
131 TestFunctional/parallel/MountCmd/any-port 8.34
132 TestFunctional/parallel/ServiceCmd/URL 0.5
133 TestFunctional/parallel/MountCmd/specific-port 2.14
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.23
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.23
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.69
142 TestFunctional/parallel/ImageCommands/Setup 2.47
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
146 TestFunctional/parallel/DockerEnv/bash 1.21
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.3
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.82
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.97
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.27
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
154 TestFunctional/delete_addon-resizer_images 0.1
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 136.22
161 TestMultiControlPlane/serial/DeployApp 42.58
162 TestMultiControlPlane/serial/PingHostFromPods 1.78
163 TestMultiControlPlane/serial/AddWorkerNode 29.27
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.72
166 TestMultiControlPlane/serial/CopyFile 18.9
167 TestMultiControlPlane/serial/StopSecondaryNode 11.77
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.55
169 TestMultiControlPlane/serial/RestartSecondaryNode 55.4
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.76
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 227.19
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.86
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
174 TestMultiControlPlane/serial/StopCluster 32.68
175 TestMultiControlPlane/serial/RestartCluster 82.11
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.54
177 TestMultiControlPlane/serial/AddSecondaryNode 44.35
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.78
181 TestImageBuild/serial/Setup 31.94
182 TestImageBuild/serial/NormalBuild 1.89
183 TestImageBuild/serial/BuildWithBuildArg 0.87
184 TestImageBuild/serial/BuildWithDockerIgnore 0.7
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
189 TestJSONOutput/start/Command 46.44
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.58
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.54
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.88
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 33.32
215 TestKicCustomNetwork/use_default_bridge_network 32.98
216 TestKicExistingNetwork 37.8
217 TestKicCustomSubnet 31.98
218 TestKicStaticIP 34.53
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 69.9
223 TestMountStart/serial/StartWithMountFirst 7.49
224 TestMountStart/serial/VerifyMountFirst 0.25
225 TestMountStart/serial/StartWithMountSecond 7.59
226 TestMountStart/serial/VerifyMountSecond 0.27
227 TestMountStart/serial/DeleteFirst 1.47
228 TestMountStart/serial/VerifyMountPostDelete 0.24
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 8.65
231 TestMountStart/serial/VerifyMountPostStop 0.25
234 TestMultiNode/serial/FreshStart2Nodes 67.02
235 TestMultiNode/serial/DeployApp2Nodes 37.93
236 TestMultiNode/serial/PingHostFrom2Pods 0.98
237 TestMultiNode/serial/AddNode 17.64
238 TestMultiNode/serial/MultiNodeLabels 0.11
239 TestMultiNode/serial/ProfileList 0.38
240 TestMultiNode/serial/CopyFile 9.85
241 TestMultiNode/serial/StopNode 2.21
242 TestMultiNode/serial/StartAfterStop 10.98
243 TestMultiNode/serial/RestartKeepsNodes 65.83
244 TestMultiNode/serial/DeleteNode 5.48
245 TestMultiNode/serial/StopMultiNode 21.71
246 TestMultiNode/serial/RestartMultiNode 57.77
247 TestMultiNode/serial/ValidateNameConflict 38.65
252 TestPreload 108.61
254 TestScheduledStopUnix 105.76
255 TestSkaffold 120.15
257 TestInsufficientStorage 10.94
258 TestRunningBinaryUpgrade 84.54
260 TestKubernetesUpgrade 368.28
261 TestMissingContainerUpgrade 116.34
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
264 TestNoKubernetes/serial/StartWithK8s 44.83
265 TestNoKubernetes/serial/StartWithStopK8s 7.9
266 TestNoKubernetes/serial/Start 10.02
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
268 TestNoKubernetes/serial/ProfileList 0.96
269 TestNoKubernetes/serial/Stop 1.22
270 TestNoKubernetes/serial/StartNoArgs 7.32
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
283 TestStoppedBinaryUpgrade/Setup 1.18
284 TestStoppedBinaryUpgrade/Upgrade 117.63
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.36
294 TestPause/serial/Start 87.52
295 TestPause/serial/SecondStartNoReconfiguration 35.45
296 TestPause/serial/Pause 0.55
297 TestPause/serial/VerifyStatus 0.32
298 TestPause/serial/Unpause 0.49
299 TestPause/serial/PauseAgain 0.68
300 TestPause/serial/DeletePaused 2.23
301 TestPause/serial/VerifyDeletedResources 14.28
302 TestNetworkPlugins/group/auto/Start 87.85
303 TestNetworkPlugins/group/auto/KubeletFlags 0.37
304 TestNetworkPlugins/group/auto/NetCatPod 13.41
305 TestNetworkPlugins/group/flannel/Start 71.66
306 TestNetworkPlugins/group/auto/DNS 0.23
307 TestNetworkPlugins/group/auto/Localhost 0.2
308 TestNetworkPlugins/group/auto/HairPin 0.24
309 TestNetworkPlugins/group/calico/Start 80.21
310 TestNetworkPlugins/group/flannel/ControllerPod 6.01
311 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
312 TestNetworkPlugins/group/flannel/NetCatPod 12.35
313 TestNetworkPlugins/group/flannel/DNS 0.3
314 TestNetworkPlugins/group/flannel/Localhost 0.26
315 TestNetworkPlugins/group/flannel/HairPin 0.25
316 TestNetworkPlugins/group/calico/ControllerPod 6.02
317 TestNetworkPlugins/group/calico/KubeletFlags 0.45
318 TestNetworkPlugins/group/calico/NetCatPod 13.33
319 TestNetworkPlugins/group/custom-flannel/Start 71.44
320 TestNetworkPlugins/group/calico/DNS 0.28
321 TestNetworkPlugins/group/calico/Localhost 0.31
322 TestNetworkPlugins/group/calico/HairPin 0.22
323 TestNetworkPlugins/group/false/Start 52.62
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.35
326 TestNetworkPlugins/group/custom-flannel/DNS 0.21
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
329 TestNetworkPlugins/group/false/KubeletFlags 0.4
330 TestNetworkPlugins/group/false/NetCatPod 12.37
331 TestNetworkPlugins/group/false/DNS 0.2
332 TestNetworkPlugins/group/false/Localhost 0.18
333 TestNetworkPlugins/group/false/HairPin 0.25
334 TestNetworkPlugins/group/kindnet/Start 73.34
335 TestNetworkPlugins/group/kubenet/Start 54.95
336 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
337 TestNetworkPlugins/group/kubenet/KubeletFlags 0.28
338 TestNetworkPlugins/group/kubenet/NetCatPod 11.26
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
340 TestNetworkPlugins/group/kindnet/NetCatPod 10.34
341 TestNetworkPlugins/group/kubenet/DNS 0.21
342 TestNetworkPlugins/group/kubenet/Localhost 0.16
343 TestNetworkPlugins/group/kubenet/HairPin 0.18
344 TestNetworkPlugins/group/kindnet/DNS 0.29
345 TestNetworkPlugins/group/kindnet/Localhost 0.16
346 TestNetworkPlugins/group/kindnet/HairPin 0.19
347 TestNetworkPlugins/group/enable-default-cni/Start 58.32
348 TestNetworkPlugins/group/bridge/Start 94.62
349 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
350 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.28
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
355 TestStartStop/group/old-k8s-version/serial/FirstStart 153.48
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
357 TestNetworkPlugins/group/bridge/NetCatPod 11.31
358 TestNetworkPlugins/group/bridge/DNS 0.18
359 TestNetworkPlugins/group/bridge/Localhost 0.21
360 TestNetworkPlugins/group/bridge/HairPin 0.2
362 TestStartStop/group/embed-certs/serial/FirstStart 91.94
363 TestStartStop/group/embed-certs/serial/DeployApp 9.39
364 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
365 TestStartStop/group/embed-certs/serial/Stop 11.02
366 TestStartStop/group/old-k8s-version/serial/DeployApp 8.75
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
368 TestStartStop/group/embed-certs/serial/SecondStart 290.45
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.49
370 TestStartStop/group/old-k8s-version/serial/Stop 11.39
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
375 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
376 TestStartStop/group/embed-certs/serial/Pause 2.83
378 TestStartStop/group/no-preload/serial/FirstStart 69.92
379 TestStartStop/group/no-preload/serial/DeployApp 7.35
380 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.09
381 TestStartStop/group/no-preload/serial/Stop 10.83
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
384 TestStartStop/group/no-preload/serial/SecondStart 269.38
385 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.17
386 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
387 TestStartStop/group/old-k8s-version/serial/Pause 4.01
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.39
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.88
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.08
395 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
397 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
398 TestStartStop/group/no-preload/serial/Pause 2.8
400 TestStartStop/group/newest-cni/serial/FirstStart 45.29
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
403 TestStartStop/group/newest-cni/serial/Stop 10.91
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
405 TestStartStop/group/newest-cni/serial/SecondStart 17.63
406 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
407 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
408 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
409 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
411 TestStartStop/group/newest-cni/serial/Pause 2.72
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
TestDownloadOnly/v1.20.0/json-events (8.08s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-017885 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-017885 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.077180058s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.08s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-017885
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-017885: exit status 85 (78.786604ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-017885 | jenkins | v1.33.1 | 28 May 24 20:56 UTC |          |
	|         | -p download-only-017885        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:56:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:56:38.659215 1070314 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:56:38.659562 1070314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:56:38.659577 1070314 out.go:304] Setting ErrFile to fd 2...
	I0528 20:56:38.659584 1070314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:56:38.659939 1070314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	W0528 20:56:38.660241 1070314 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18966-1064873/.minikube/config/config.json: open /home/jenkins/minikube-integration/18966-1064873/.minikube/config/config.json: no such file or directory
	I0528 20:56:38.660770 1070314 out.go:298] Setting JSON to true
	I0528 20:56:38.661679 1070314 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16748,"bootTime":1716913051,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 20:56:38.661808 1070314 start.go:139] virtualization:  
	I0528 20:56:38.664793 1070314 out.go:97] [download-only-017885] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0528 20:56:38.664933 1070314 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball: no such file or directory
	I0528 20:56:38.666988 1070314 out.go:169] MINIKUBE_LOCATION=18966
	I0528 20:56:38.665043 1070314 notify.go:220] Checking for updates...
	I0528 20:56:38.671436 1070314 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:56:38.673527 1070314 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 20:56:38.675681 1070314 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 20:56:38.678151 1070314 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0528 20:56:38.682429 1070314 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 20:56:38.682735 1070314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:56:38.703436 1070314 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 20:56:38.703538 1070314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 20:56:38.762947 1070314 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 20:56:38.754114155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 20:56:38.763064 1070314 docker.go:295] overlay module found
	I0528 20:56:38.765705 1070314 out.go:97] Using the docker driver based on user configuration
	I0528 20:56:38.765742 1070314 start.go:297] selected driver: docker
	I0528 20:56:38.765749 1070314 start.go:901] validating driver "docker" against <nil>
	I0528 20:56:38.765863 1070314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 20:56:38.819630 1070314 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 20:56:38.810877339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 20:56:38.819875 1070314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:56:38.820251 1070314 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0528 20:56:38.820476 1070314 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 20:56:38.822961 1070314 out.go:169] Using Docker driver with root privileges
	I0528 20:56:38.824573 1070314 cni.go:84] Creating CNI manager for ""
	I0528 20:56:38.824603 1070314 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0528 20:56:38.824675 1070314 start.go:340] cluster config:
	{Name:download-only-017885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-017885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:56:38.826547 1070314 out.go:97] Starting "download-only-017885" primary control-plane node in "download-only-017885" cluster
	I0528 20:56:38.826566 1070314 cache.go:121] Beginning downloading kic base image for docker with docker
	I0528 20:56:38.828420 1070314 out.go:97] Pulling base image v0.0.44-1716228441-18934 ...
	I0528 20:56:38.828456 1070314 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0528 20:56:38.828545 1070314 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 20:56:38.846433 1070314 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 20:56:38.846597 1070314 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 20:56:38.846695 1070314 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 20:56:38.905314 1070314 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0528 20:56:38.905351 1070314 cache.go:56] Caching tarball of preloaded images
	I0528 20:56:38.906097 1070314 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0528 20:56:38.908439 1070314 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0528 20:56:38.908459 1070314 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0528 20:56:39.019861 1070314 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-017885 host does not exist
	  To start a cluster, run: "minikube start -p download-only-017885"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
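The Last Start trace above walks the preload path end to end: minikube finds no local preload, resolves the remote tarball for v1.20.0 on docker/arm64, and downloads it with an md5 checksum query. A minimal sketch of fetching the same artifact by hand, with the URL and checksum copied verbatim from the log (minikube normally manages this cache itself under .minikube/cache):

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	# the download above was requested with checksum=md5:1a3e8f9b29e6affec63d76d0d3000942
	md5sum preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4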

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-017885
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.1/json-events (7.41s)

=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-478907 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-478907 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.413064535s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (7.41s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-478907
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-478907: exit status 85 (73.132945ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-017885 | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | -p download-only-017885        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| delete  | -p download-only-017885        | download-only-017885 | jenkins | v1.33.1 | 28 May 24 20:56 UTC | 28 May 24 20:56 UTC |
	| start   | -o=json --download-only        | download-only-478907 | jenkins | v1.33.1 | 28 May 24 20:56 UTC |                     |
	|         | -p download-only-478907        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/28 20:56:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0528 20:56:47.149089 1070483 out.go:291] Setting OutFile to fd 1 ...
	I0528 20:56:47.149292 1070483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:56:47.149319 1070483 out.go:304] Setting ErrFile to fd 2...
	I0528 20:56:47.149337 1070483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 20:56:47.149580 1070483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 20:56:47.150009 1070483 out.go:298] Setting JSON to true
	I0528 20:56:47.150922 1070483 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":16757,"bootTime":1716913051,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 20:56:47.151013 1070483 start.go:139] virtualization:  
	I0528 20:56:47.153765 1070483 out.go:97] [download-only-478907] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 20:56:47.156524 1070483 out.go:169] MINIKUBE_LOCATION=18966
	I0528 20:56:47.153962 1070483 notify.go:220] Checking for updates...
	I0528 20:56:47.160361 1070483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 20:56:47.162341 1070483 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 20:56:47.164265 1070483 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 20:56:47.166354 1070483 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0528 20:56:47.169774 1070483 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0528 20:56:47.170081 1070483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 20:56:47.190738 1070483 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 20:56:47.190856 1070483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 20:56:47.250538 1070483 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-28 20:56:47.241407202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 20:56:47.250703 1070483 docker.go:295] overlay module found
	I0528 20:56:47.252849 1070483 out.go:97] Using the docker driver based on user configuration
	I0528 20:56:47.252905 1070483 start.go:297] selected driver: docker
	I0528 20:56:47.252918 1070483 start.go:901] validating driver "docker" against <nil>
	I0528 20:56:47.253044 1070483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 20:56:47.303304 1070483 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-28 20:56:47.294849389 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 20:56:47.303477 1070483 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0528 20:56:47.303755 1070483 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0528 20:56:47.303918 1070483 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0528 20:56:47.306879 1070483 out.go:169] Using Docker driver with root privileges
	I0528 20:56:47.308972 1070483 cni.go:84] Creating CNI manager for ""
	I0528 20:56:47.308998 1070483 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0528 20:56:47.309009 1070483 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0528 20:56:47.309092 1070483 start.go:340] cluster config:
	{Name:download-only-478907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-478907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 20:56:47.311399 1070483 out.go:97] Starting "download-only-478907" primary control-plane node in "download-only-478907" cluster
	I0528 20:56:47.311425 1070483 cache.go:121] Beginning downloading kic base image for docker with docker
	I0528 20:56:47.313317 1070483 out.go:97] Pulling base image v0.0.44-1716228441-18934 ...
	I0528 20:56:47.313341 1070483 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 20:56:47.313366 1070483 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local docker daemon
	I0528 20:56:47.328201 1070483 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 to local cache
	I0528 20:56:47.328317 1070483 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory
	I0528 20:56:47.328342 1070483 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 in local cache directory, skipping pull
	I0528 20:56:47.328348 1070483 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 exists in cache, skipping pull
	I0528 20:56:47.328356 1070483 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 as a tarball
	I0528 20:56:47.391532 1070483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0528 20:56:47.391569 1070483 cache.go:56] Caching tarball of preloaded images
	I0528 20:56:47.392076 1070483 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0528 20:56:47.394364 1070483 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0528 20:56:47.394386 1070483 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0528 20:56:47.511691 1070483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /home/jenkins/minikube-integration/18966-1064873/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-478907 host does not exist
	  To start a cluster, run: "minikube start -p download-only-478907"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.07s)
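
Note the cache-hit path in this run: the kicbase tarball was found in the local cache directory, so the pull was skipped. A minimal sketch of that exists-then-skip decision, with invented paths and helper names (not minikube's actual cache code):

// cachedOrPull reports whether an image tarball is already cached; a real
// implementation would fall back to pulling on a miss.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cachedOrPull(cacheDir, imageRef string) (string, bool) {
	path := filepath.Join(cacheDir, filepath.Base(imageRef)+".tar")
	if _, err := os.Stat(path); err == nil {
		return path, true // cache hit: skip the pull
	}
	return path, false // cache miss: caller should download
}

func main() {
	p, hit := cachedOrPull(os.ExpandEnv("$HOME/.minikube/cache/kic"), "kicbase-builds:v0.0.44")
	fmt.Println(p, "cached:", hit)
}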

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-478907
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-848255 --alsologtostderr --binary-mirror http://127.0.0.1:42511 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-848255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-848255
--- PASS: TestBinaryMirror (0.57s)
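
The --binary-mirror flag points minikube's kubectl/kubelet/kubeadm downloads at a local HTTP server instead of the public release bucket. A minimal stand-in mirror is just a static file server over a directory shaped like the release tree; the ./mirror directory name here is an assumption:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve e.g. ./mirror/v1.30.1/bin/linux/arm64/kubectl at
	// http://127.0.0.1:42511/v1.30.1/bin/linux/arm64/kubectl
	log.Fatal(http.ListenAndServe("127.0.0.1:42511",
		http.FileServer(http.Dir("./mirror"))))
}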

                                                
                                    
x
+
TestOffline (100.81s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-602515 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-602515 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m38.523720223s)
helpers_test.go:175: Cleaning up "offline-docker-602515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-602515
E0528 21:40:43.858250 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-602515: (2.290182627s)
--- PASS: TestOffline (100.81s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-885631
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-885631: exit status 85 (73.730534ms)

                                                
                                                
-- stdout --
	* Profile "addons-885631" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885631"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-885631
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-885631: exit status 85 (69.522109ms)

                                                
                                                
-- stdout --
	* Profile "addons-885631" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-885631"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (227.49s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-885631 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-885631 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m47.49318091s)
--- PASS: TestAddons/Setup (227.49s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 35.357204ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-zksdm" [1fab4f53-2cf7-4c1f-abcb-b10d106e8a5b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00487903s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zjgsw" [7b440740-69b2-460e-8ffa-92e1f22ea088] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00492448s
addons_test.go:342: (dbg) Run:  kubectl --context addons-885631 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-885631 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-885631 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.71456482s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 ip
2024/05/28 21:00:59 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.00s)
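
The registry check above boils down to "does the service answer HTTP", probed with wget --spider from inside the cluster. A hedged Go equivalent of that probe (assumes in-cluster DNS; from the host you would target the node address, http://192.168.49.2:5000, instead):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// registryUp sends a HEAD request, like wget --spider: headers only,
// any non-5xx response counts as reachable.
func registryUp(url string) bool {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode < 500
}

func main() {
	fmt.Println(registryUp("http://registry.kube-system.svc.cluster.local"))
}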

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bmwwj" [169c7276-24ff-46a2-aed3-77fd4b52b77b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004000909s
addons_test.go:843: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-885631
addons_test.go:843: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-885631: (5.841587398s)
--- PASS: TestAddons/parallel/InspektorGadget (11.85s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.75s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.701065ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-r5rzf" [8612d4f8-944b-4a38-803c-b72629ba7c6e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004508391s
addons_test.go:417: (dbg) Run:  kubectl --context addons-885631 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.75s)

                                                
                                    
x
+
TestAddons/parallel/CSI (62.97s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 5.024725ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-885631 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-885631 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [85e7edbe-dcb3-4dc8-aa10-c894cc259e31] Pending
helpers_test.go:344: "task-pv-pod" [85e7edbe-dcb3-4dc8-aa10-c894cc259e31] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [85e7edbe-dcb3-4dc8-aa10-c894cc259e31] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003731087s
addons_test.go:586: (dbg) Run:  kubectl --context addons-885631 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-885631 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-885631 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-885631 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-885631 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-885631 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-885631 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bba3b4b8-a864-43b5-824b-a00acd161dd3] Pending
helpers_test.go:344: "task-pv-pod-restore" [bba3b4b8-a864-43b5-824b-a00acd161dd3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bba3b4b8-a864-43b5-824b-a00acd161dd3] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004191651s
addons_test.go:628: (dbg) Run:  kubectl --context addons-885631 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-885631 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-885631 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-arm64 -p addons-885631 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.699913041s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.97s)
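
The run of identical "kubectl get pvc ... -o jsonpath={.status.phase}" invocations above is a poll loop waiting for each claim to reach Bound. A rough equivalent, shelling out to kubectl; the helper name and the 2s interval are illustrative, not the suite's actual helpers_test.go code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound polls the PVC phase until it is "Bound" or the timeout expires.
func waitPVCBound(ctxName, pvc, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctxName,
			"get", "pvc", pvc, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // re-check, like the repeated calls in the log
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", ns, pvc, timeout)
}

func main() {
	if err := waitPVCBound("addons-885631", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}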

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-885631 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-stt7b" [42ba809a-e847-44fe-9fe9-eaf96f997835] Pending
helpers_test.go:344: "headlamp-68456f997b-stt7b" [42ba809a-e847-44fe-9fe9-eaf96f997835] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-stt7b" [42ba809a-e847-44fe-9fe9-eaf96f997835] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.009262866s
--- PASS: TestAddons/parallel/Headlamp (12.00s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-lnsv2" [bb084760-114d-4e32-81f7-29a7daa1ec01] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003672263s
addons_test.go:862: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-885631
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.31s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-885631 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-885631 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-885631 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [214b540c-d0fc-478d-9a82-7f057f83d3eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [214b540c-d0fc-478d-9a82-7f057f83d3eb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [214b540c-d0fc-478d-9a82-7f057f83d3eb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003972162s
addons_test.go:992: (dbg) Run:  kubectl --context addons-885631 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 ssh "cat /opt/local-path-provisioner/pvc-1854f33f-7dd6-4837-ad14-36e6a18701c4_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-885631 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-885631 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-arm64 -p addons-885631 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.123168171s)
--- PASS: TestAddons/parallel/LocalPath (52.31s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7cvnl" [5751dee2-aa23-4e6b-9820-920308dcf9b6] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004585385s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-885631
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-pdng2" [f9d9471c-3dee-41c1-aa9d-3f9e53e7e47b] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003900045s
--- PASS: TestAddons/parallel/Yakd (6.00s)

                                                
                                    
x
+
TestAddons/parallel/Volcano (35.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 4.936891ms
addons_test.go:905: volcano-controller stabilized in 5.74045ms
addons_test.go:889: volcano-scheduler stabilized in 6.359315ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-765f888978-9hfhq" [c672baf4-63e6-4d5a-9380-ed268700f90d] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.005830431s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-7b497cf95b-szz4g" [1ca526fe-1e1e-405a-92ec-bde4b92b9ff5] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004725825s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controller-86c5446455-r6rsm" [475e03e8-2515-4469-908a-003c2f0f4974] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003699939s
addons_test.go:924: (dbg) Run:  kubectl --context addons-885631 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-885631 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-885631 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2efde3d5-00a8-4472-aaeb-49c9e4ed38a4] Pending
helpers_test.go:344: "test-job-nginx-0" [2efde3d5-00a8-4472-aaeb-49c9e4ed38a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2efde3d5-00a8-4472-aaeb-49c9e4ed38a4] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 10.00318331s
addons_test.go:960: (dbg) Run:  out/minikube-linux-arm64 -p addons-885631 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-arm64 -p addons-885631 addons disable volcano --alsologtostderr -v=1: (9.731595472s)
--- PASS: TestAddons/parallel/Volcano (35.26s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-885631 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-885631 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.13s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-885631
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-885631: (10.828664748s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-885631
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-885631
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-885631
--- PASS: TestAddons/StoppedEnableDisable (11.13s)

                                                
                                    
x
+
TestCertOptions (40.99s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-752404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-752404 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (38.171588254s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-752404 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-752404 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-752404 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-752404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-752404
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-752404: (2.164610711s)
--- PASS: TestCertOptions (40.99s)

                                                
                                    
x
+
TestCertExpiration (250.74s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-006308 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-006308 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (44.840570436s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-006308 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-006308 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (23.34378798s)
helpers_test.go:175: Cleaning up "cert-expiration-006308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-006308
E0528 21:45:43.860281 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-006308: (2.554422797s)
--- PASS: TestCertExpiration (250.74s)
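
This test starts with --cert-expiration=3m, then restarts with 8760h (one year) to exercise certificate regeneration. A small sketch of how one might inspect the resulting NotAfter on a cert such as /var/lib/minikube/certs/apiserver.crt (path borrowed from TestCertOptions above; the checker itself is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM cert at path expires inside the window.
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	soon, err := certExpiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}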

                                                
                                    
x
+
TestDockerFlags (45.91s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-689164 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-689164 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.904947789s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-689164 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-689164 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-689164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-689164
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-689164: (2.291973466s)
--- PASS: TestDockerFlags (45.91s)
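
The assertions here verify that --docker-env and --docker-opt values reach the docker systemd unit, read back with systemctl show. A sketch of that check in Go; note the real test runs the command inside the node via minikube ssh, and the function name is invented:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerUnitHas reads one systemd property of the docker unit and checks
// that the expected fragment (e.g. "FOO=BAR" in Environment) is present.
func dockerUnitHas(property, want string) (bool, error) {
	out, err := exec.Command("systemctl", "show", "docker",
		"--property="+property, "--no-pager").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), want), nil
}

func main() {
	ok, err := dockerUnitHas("Environment", "FOO=BAR")
	fmt.Println(ok, err)
}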

                                                
                                    
x
+
TestForceSystemdFlag (48.9s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-738296 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-738296 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.067011021s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-738296 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-738296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-738296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-738296: (2.355158253s)
--- PASS: TestForceSystemdFlag (48.90s)

                                                
                                    
x
+
TestForceSystemdEnv (39.21s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-866819 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-866819 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (36.527542774s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-866819 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-866819" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-866819
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-866819: (2.281068202s)
--- PASS: TestForceSystemdEnv (39.21s)

TestErrorSpam/setup (30.6s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-561363 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-561363 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-561363 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-561363 --driver=docker  --container-runtime=docker: (30.60462393s)
--- PASS: TestErrorSpam/setup (30.60s)

TestErrorSpam/start (0.7s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (0.98s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (1.28s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 pause
--- PASS: TestErrorSpam/pause (1.28s)

TestErrorSpam/unpause (1.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (10.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 stop: (10.791730504s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-561363 --log_dir /tmp/nospam-561363 stop
--- PASS: TestErrorSpam/stop (10.99s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18966-1064873/.minikube/files/etc/test/nested/copy/1070309/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (85.31s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-409073 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0528 21:05:43.858385 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:43.864755 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:43.875095 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:43.895429 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:43.935829 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:44.016283 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:44.176731 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:44.497319 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:45.138181 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-409073 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m25.308022306s)
--- PASS: TestFunctional/serial/StartWithProxy (85.31s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.44s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-409073 --alsologtostderr -v=8
E0528 21:05:46.418598 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:48.979696 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:05:54.100184 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:06:04.340428 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-409073 --alsologtostderr -v=8: (33.435032398s)
functional_test.go:659: soft start took 33.435523973s for "functional-409073" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.44s)

TestFunctional/serial/KubeContext (0.08s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

TestFunctional/serial/KubectlGetPods (0.17s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-409073 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.17s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.84s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 cache add registry.k8s.io/pause:3.1: (1.02051131s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 cache add registry.k8s.io/pause:3.3: (1.013586767s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.84s)

TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-409073 /tmp/TestFunctionalserialCacheCmdcacheadd_local4141642616/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cache add minikube-local-cache-test:functional-409073
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cache delete minikube-local-cache-test:functional-409073
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-409073
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.625463ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cache reload
E0528 21:06:24.820683 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
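
The sequence above is a remove / verify-absent / reload / verify-present round trip. A compact sketch of the same flow; `run` is a hypothetical helper, and the binary path and profile name are taken from this run:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes minikube against this run's profile and reports whether the
// command exited zero. (Hypothetical helper, for illustration only.)
func run(args ...string) bool {
	return exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-409073"}, args...)...).Run() == nil
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	run("ssh", "sudo docker rmi "+img) // drop the image from the node
	if run("ssh", "sudo crictl inspecti "+img) {
		fmt.Println("image unexpectedly still present after rmi")
	}
	run("cache", "reload") // push everything in the local cache back to the node
	if !run("ssh", "sudo crictl inspecti "+img) {
		fmt.Println("image not restored by cache reload")
	}
}
```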

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 kubectl -- --context functional-409073 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-409073 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (119.04s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-409073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0528 21:07:05.780998 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-409073 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m59.03549555s)
functional_test.go:757: restart took 1m59.035622637s for "functional-409073" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (119.04s)

TestFunctional/serial/LogsCmd (1.26s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 logs
E0528 21:08:27.701902 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 logs: (1.25933902s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 logs --file /tmp/TestFunctionalserialLogsFileCmd765409220/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 logs --file /tmp/TestFunctionalserialLogsFileCmd765409220/001/logs.txt: (1.147299832s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (4.92s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-409073 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-409073
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-409073: exit status 115 (418.671129ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32115 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-409073 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-409073 delete -f testdata/invalidsvc.yaml: (1.249452412s)
--- PASS: TestFunctional/serial/InvalidService (4.92s)
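
Note the dedicated exit code: `minikube service` reports an unreachable service with exit status 115 (SVC_UNREACHABLE) rather than a generic failure, which is exactly what the test asserts. A sketch of reading that code from Go, with names taken from this run:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A service whose pods never come up should make `minikube service` fail
	// with the dedicated SVC_UNREACHABLE code, not a generic exit 1.
	err := exec.Command("out/minikube-linux-arm64",
		"service", "invalid-svc", "-p", "functional-409073").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit status 115")
	} else {
		fmt.Println("unexpected result:", err)
	}
}
```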

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 config get cpus: exit status 14 (78.397589ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 config get cpus: exit status 14 (66.411743ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
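
The block above is a set/get/unset round trip, with exit status 14 marking a key that is absent from the config. A sketch of the same round trip; `mk` is a hypothetical wrapper around the binary from this run:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// mk runs `minikube config <args>` for this run's profile and returns the
// combined output plus the exit code. (Hypothetical wrapper.)
func mk(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-409073", "config"}, args...)...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	mk("unset", "cpus")
	if _, code := mk("get", "cpus"); code != 14 { // absent key => exit 14
		fmt.Println("expected exit 14 for an unset key, got", code)
	}
	mk("set", "cpus", "2")
	if out, code := mk("get", "cpus"); code != 0 || strings.TrimSpace(out) != "2" {
		fmt.Printf("expected cpus=2, got %q (exit %d)\n", out, code)
	}
	mk("unset", "cpus") // leave the config as we found it
}
```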

TestFunctional/parallel/DashboardCmd (11.77s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-409073 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-409073 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1109274: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.77s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-409073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-409073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (175.150979ms)

-- stdout --
	* [functional-409073] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0528 21:09:06.927592 1108895 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:09:06.927786 1108895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:09:06.927818 1108895 out.go:304] Setting ErrFile to fd 2...
	I0528 21:09:06.927839 1108895 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:09:06.928109 1108895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:09:06.928545 1108895 out.go:298] Setting JSON to false
	I0528 21:09:06.929608 1108895 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17496,"bootTime":1716913051,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 21:09:06.929731 1108895 start.go:139] virtualization:  
	I0528 21:09:06.932251 1108895 out.go:177] * [functional-409073] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0528 21:09:06.934503 1108895 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:09:06.936389 1108895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:09:06.934611 1108895 notify.go:220] Checking for updates...
	I0528 21:09:06.940164 1108895 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 21:09:06.942157 1108895 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 21:09:06.944261 1108895 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:09:06.946007 1108895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:09:06.948326 1108895 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:09:06.948917 1108895 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:09:06.970843 1108895 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:09:06.970972 1108895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:09:07.038630 1108895 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 21:09:07.028470012 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:09:07.038746 1108895 docker.go:295] overlay module found
	I0528 21:09:07.041347 1108895 out.go:177] * Using the docker driver based on existing profile
	I0528 21:09:07.043479 1108895 start.go:297] selected driver: docker
	I0528 21:09:07.043501 1108895 start.go:901] validating driver "docker" against &{Name:functional-409073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:09:07.043597 1108895 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:09:07.046273 1108895 out.go:177] 
	W0528 21:09:07.047960 1108895 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0528 21:09:07.049775 1108895 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-409073 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.48s)
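
The dry-run path validates flags against the existing profile without touching the cluster; an impossible memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A sketch of asserting that, under the same binary/profile assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run validates flags against the existing profile without starting
	// anything; a 250MB request is below the usable minimum, so minikube
	// should exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as seen above.
	err := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-409073",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker").Run()
	var ee *exec.ExitError
	if !errors.As(err, &ee) || ee.ExitCode() != 23 {
		fmt.Println("expected exit status 23, got:", err)
	}
}
```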

TestFunctional/parallel/InternationalLanguage (0.26s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-409073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-409073 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (257.697752ms)

-- stdout --
	* [functional-409073] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0528 21:09:06.715724 1108852 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:09:06.715984 1108852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:09:06.716015 1108852 out.go:304] Setting ErrFile to fd 2...
	I0528 21:09:06.716037 1108852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:09:06.716416 1108852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:09:06.716842 1108852 out.go:298] Setting JSON to false
	I0528 21:09:06.717984 1108852 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17496,"bootTime":1716913051,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0528 21:09:06.718111 1108852 start.go:139] virtualization:  
	I0528 21:09:06.721115 1108852 out.go:177] * [functional-409073] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0528 21:09:06.723597 1108852 out.go:177]   - MINIKUBE_LOCATION=18966
	I0528 21:09:06.723662 1108852 notify.go:220] Checking for updates...
	I0528 21:09:06.728635 1108852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0528 21:09:06.730866 1108852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	I0528 21:09:06.732577 1108852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	I0528 21:09:06.734590 1108852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0528 21:09:06.736529 1108852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0528 21:09:06.738877 1108852 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:09:06.739496 1108852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0528 21:09:06.785830 1108852 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0528 21:09:06.785940 1108852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:09:06.861087 1108852 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-28 21:09:06.851145718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:09:06.861193 1108852 docker.go:295] overlay module found
	I0528 21:09:06.863335 1108852 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0528 21:09:06.865317 1108852 start.go:297] selected driver: docker
	I0528 21:09:06.865334 1108852 start.go:901] validating driver "docker" against &{Name:functional-409073 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-409073 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0528 21:09:06.865439 1108852 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0528 21:09:06.868144 1108852 out.go:177] 
	W0528 21:09:06.870080 1108852 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0528 21:09:06.874243 1108852 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)
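
The French output above is driven by the process locale; the sketch below assumes (an assumption not shown in this log) that setting LC_ALL=fr before invoking the same dry-run command selects minikube's French message catalog:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Assumption for illustration: minikube chooses its message catalog from
	// the process locale, so LC_ALL=fr should reproduce the French output
	// captured above ("Utilisation du pilote docker...").
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-409073",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // exit 23 is expected, exactly as in DryRun
	if !strings.Contains(string(out), "pilote docker") {
		fmt.Println("output was not localized to French:", string(out))
	}
}
```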

TestFunctional/parallel/StatusCmd (1.2s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.20s)

TestFunctional/parallel/ServiceCmdConnect (11.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-409073 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-409073 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-b7tdc" [081257fc-7ec1-424b-b4cc-4f4979e0428c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-b7tdc" [081257fc-7ec1-424b-b4cc-4f4979e0428c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004545105s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32359
functional_test.go:1671: http://192.168.49.2:32359: success! body:

Hostname: hello-node-connect-6f49f58cd5-b7tdc

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32359
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.66s)
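
End to end, the test is: create a deployment, expose it as a NodePort service, resolve the URL with `minikube service --url`, and GET it. A sketch of the last two steps with the names from this run:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// After `kubectl create deployment` and `kubectl expose --type=NodePort`,
	// `minikube service --url` resolves a reachable node URL; the echoserver
	// then reflects the request, as in the body captured above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-409073",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}
```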

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (26.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [29db9576-4483-4105-900d-513a47525eff] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004534867s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-409073 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-409073 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-409073 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-409073 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3a19ce8f-fb46-40c7-a105-b4c7600dbcb5] Pending
helpers_test.go:344: "sp-pod" [3a19ce8f-fb46-40c7-a105-b4c7600dbcb5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3a19ce8f-fb46-40c7-a105-b4c7600dbcb5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003726159s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-409073 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-409073 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-409073 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [304bcffc-dd1b-4b79-917f-e1dbeffbb595] Pending
helpers_test.go:344: "sp-pod" [304bcffc-dd1b-4b79-917f-e1dbeffbb595] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [304bcffc-dd1b-4b79-917f-e1dbeffbb595] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004524307s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-409073 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.64s)
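
The heart of this test is that data written under the PVC mount outlives the pod: touch a marker, delete and recreate the pod, then list the marker. A sketch of that core, with `kc` as a hypothetical kubectl wrapper (the real test also waits for the new pod to be Running):

```go
package main

import (
	"fmt"
	"os/exec"
)

// kc wraps kubectl against this run's context. (Hypothetical helper.)
func kc(args ...string) error {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-409073"}, args...)...).Run()
}

func main() {
	// Write a marker under the PVC mount, recreate the pod, then look for the
	// marker again; the claim is that the volume, not the pod, owns the data.
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (The real test waits for the new pod to be Running before this step.)
	if err := kc("exec", "sp-pod", "--", "ls", "/tmp/mount/foo"); err != nil {
		fmt.Println("marker file did not survive pod recreation:", err)
	}
}
```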

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh -n functional-409073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cp functional-409073:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3644083879/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh -n functional-409073 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh -n functional-409073 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.28s)
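
A round-trip way to convince yourself `minikube cp` is lossless: copy a file into the node, copy it back out, and compare bytes. A sketch under the same profile assumption (/tmp/cp-test-roundtrip.txt is an arbitrary scratch path):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Copy a file into the node, copy it back out, and compare bytes; the
	// suite performs the same round trip via `minikube cp` and `ssh sudo cat`.
	const mk = "out/minikube-linux-arm64"
	exec.Command(mk, "-p", "functional-409073", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
	exec.Command(mk, "-p", "functional-409073", "cp",
		"functional-409073:/home/docker/cp-test.txt", "/tmp/cp-test-roundtrip.txt").Run()
	want, _ := os.ReadFile("testdata/cp-test.txt")
	got, _ := os.ReadFile("/tmp/cp-test-roundtrip.txt")
	if len(want) == 0 || !bytes.Equal(want, got) {
		fmt.Println("cp round trip changed or lost file contents")
	}
}
```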

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1070309/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /etc/test/nested/copy/1070309/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1070309.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /etc/ssl/certs/1070309.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1070309.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /usr/share/ca-certificates/1070309.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/10703092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /etc/ssl/certs/10703092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/10703092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /usr/share/ca-certificates/10703092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)
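
Each synced certificate is probed at three places: the PEM dropped in /etc/ssl/certs, its copy under /usr/share/ca-certificates, and the OpenSSL subject-hash alias (e.g. 51391683.0). A sketch of the presence check, with paths taken from this run's log:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each synced certificate must be readable at all three locations the
	// test probes (paths taken from this run's log).
	paths := []string{
		"/etc/ssl/certs/1070309.pem",
		"/usr/share/ca-certificates/1070309.pem",
		"/etc/ssl/certs/51391683.0", // OpenSSL subject-hash alias
	}
	for _, p := range paths {
		if err := exec.Command("out/minikube-linux-arm64", "-p", "functional-409073",
			"ssh", "sudo cat "+p).Run(); err != nil {
			fmt.Println("missing or unreadable:", p, err)
		}
	}
}
```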

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-409073 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
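
Note: the go-template above can be tried standalone. A self-contained Go example that applies the same template to a tiny sample of `kubectl get nodes -o json` output (the sample labels are invented for illustration):

// labels_template.go: prints the first node's label keys.
package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

const sample = `{"items":[{"metadata":{"labels":{
  "kubernetes.io/arch":"arm64","kubernetes.io/os":"linux"}}}]}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
		log.Fatal(err)
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		log.Fatal(err)
	}
}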

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh "sudo systemctl is-active crio": exit status 1 (319.029739ms)

-- stdout --
	inactive
-- /stdout --

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)
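
Note: the non-zero exit here is the expected result: `systemctl is-active` exits 0 only when the unit is active, so on a docker-runtime cluster the crio probe should fail. A minimal sketch of that inverted check, assuming `minikube` on PATH and this run's profile:

// runtime_inactive.go: sketch only; passes when crio is NOT active.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-409073", "ssh",
		"sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err == nil && state == "active" {
		log.Fatal("crio is active but docker is the selected runtime")
	}
	log.Printf("crio state %q (err: %v), expected on a docker cluster", state, err)
}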

                                                
                                    
TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-409073 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-409073 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-409073 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1106466: os: process already finished
helpers_test.go:502: unable to terminate pid 1106298: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-409073 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-409073 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-409073 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fca2bc53-8421-4204-88ef-ad37d6e829cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fca2bc53-8421-4204-88ef-ad37d6e829cf] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004537682s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-409073 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.237.201 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
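
Note: the IngressIP/AccessDirect pair boils down to reading the LoadBalancer ingress IP that `minikube tunnel` assigned and issuing a plain HTTP GET against it. A minimal sketch, assuming kubectl on PATH and this run's context name (the jsonpath expression is copied from the log):

// tunnel_probe.go: sketch only, not the harness code.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-409073",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	ip := strings.TrimSpace(string(out))
	resp, err := http.Get(fmt.Sprintf("http://%s", ip))
	if err != nil {
		log.Fatalf("tunnel not reachable: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("tunnel at http://%s answered: %s", ip, resp.Status)
}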

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-409073 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-409073 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-409073 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-rhkp5" [4f8ae2e6-d0ae-441c-84ff-f87456185610] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-rhkp5" [4f8ae2e6-d0ae-441c-84ff-f87456185610] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003904355s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)
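
Note: a sketch of the same deploy-expose-wait sequence driven through kubectl. The create/expose arguments mirror the log; the jsonpath phase poll and the 2-second interval are assumptions (the harness uses its own pod-wait helpers):

// deploy_wait.go: sketch only, not the harness code.
package main

import (
	"log"
	"os/exec"
	"strings"
	"time"
)

func kubectl(args ...string) ([]byte, error) {
	base := []string{"--context", "functional-409073"}
	return exec.Command("kubectl", append(base, args...)...).CombinedOutput()
}

func main() {
	kubectl("create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver-arm:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")

	deadline := time.Now().Add(10 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		out, _ := kubectl("get", "pods", "-l", "app=hello-node",
			"-o", "jsonpath={.items[*].status.phase}")
		if strings.Contains(string(out), "Running") {
			log.Print("app=hello-node healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=hello-node")
}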

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.55s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 service list -o json
functional_test.go:1490: Took "557.76926ms" to run "out/minikube-linux-arm64 -p functional-409073 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "370.328687ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "86.452879ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30513
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "414.748588ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "90.942051ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.34s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdany-port1980963517/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716930544680954803" to /tmp/TestFunctionalparallelMountCmdany-port1980963517/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716930544680954803" to /tmp/TestFunctionalparallelMountCmdany-port1980963517/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716930544680954803" to /tmp/TestFunctionalparallelMountCmdany-port1980963517/001/test-1716930544680954803
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (472.936321ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 28 21:09 created-by-test
-rw-r--r-- 1 docker docker 24 May 28 21:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 28 21:09 test-1716930544680954803
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh cat /mount-9p/test-1716930544680954803
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-409073 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [11171344-7927-403a-b6e7-adb3d95efc89] Pending
helpers_test.go:344: "busybox-mount" [11171344-7927-403a-b6e7-adb3d95efc89] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [11171344-7927-403a-b6e7-adb3d95efc89] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [11171344-7927-403a-b6e7-adb3d95efc89] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004099269s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-409073 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdany-port1980963517/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.34s)
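
Note: the first findmnt above fails because the 9p mount is started as a background daemon and can race the check; the harness simply retries. A minimal polling sketch of that idea, assuming `minikube` on PATH, this run's profile, and an arbitrary 30s budget:

// mount_ready.go: sketch only; polls until /mount-9p shows up as 9p.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second) // assumption: small budget
	for time.Now().Before(deadline) {
		err := exec.Command("minikube", "-p", "functional-409073", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Print("/mount-9p is a 9p mount")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("9p mount never appeared")
}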

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30513
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.14s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdspecific-port787208375/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (490.808878ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdspecific-port787208375/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh "sudo umount -f /mount-9p": exit status 1 (354.155585ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --

** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-409073 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdspecific-port787208375/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.14s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107917067/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107917067/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107917067/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T" /mount1: exit status 1 (828.783624ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-409073 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107917067/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107917067/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-409073 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3107917067/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (1.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 version -o=json --components: (1.229220535s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-409073 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-409073
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-409073
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-409073 image ls --format short --alsologtostderr:
I0528 21:09:37.276876 1111891 out.go:291] Setting OutFile to fd 1 ...
I0528 21:09:37.277045 1111891 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.277053 1111891 out.go:304] Setting ErrFile to fd 2...
I0528 21:09:37.277073 1111891 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.277453 1111891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
I0528 21:09:37.278528 1111891 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.278709 1111891 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.279761 1111891 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
I0528 21:09:37.302182 1111891 ssh_runner.go:195] Run: systemctl --version
I0528 21:09:37.302239 1111891 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
I0528 21:09:37.331537 1111891 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
I0528 21:09:37.426404 1111891 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-409073 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.1           | 163ff818d154d | 60.5MB |
| docker.io/library/nginx                     | latest            | 8dd77ef2d82ea | 193MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 234ac56e455be | 107MB  |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 988b55d423baf | 112MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/minikube-local-cache-test | functional-409073 | 851b8fa219634 | 30B    |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-409073 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | 9d6767b714bf1 | 49.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 05eccb821e159 | 87.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-409073 image ls --format table --alsologtostderr:
I0528 21:09:37.846393 1112028 out.go:291] Setting OutFile to fd 1 ...
I0528 21:09:37.846579 1112028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.846590 1112028 out.go:304] Setting ErrFile to fd 2...
I0528 21:09:37.846596 1112028 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.846875 1112028 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
I0528 21:09:37.847539 1112028 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.847658 1112028 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.848125 1112028 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
I0528 21:09:37.865788 1112028 ssh_runner.go:195] Run: systemctl --version
I0528 21:09:37.865849 1112028 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
I0528 21:09:37.904192 1112028 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
I0528 21:09:37.996091 1112028 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-409073 image ls --format json --alsologtostderr:
[{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"851b8fa21963411d839c37cb92a579c4d775b20b1d266a6466d15e527df69b49","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-409073"],"size":"30"},{"id":"163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"60500000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"107000000"},{"id":"05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"87900000"},{"id":"8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"112000000"},{"id":"9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49700000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-409073"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-409073 image ls --format json --alsologtostderr:
I0528 21:09:37.585267 1111955 out.go:291] Setting OutFile to fd 1 ...
I0528 21:09:37.585493 1111955 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.585520 1111955 out.go:304] Setting ErrFile to fd 2...
I0528 21:09:37.585538 1111955 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.585791 1111955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
I0528 21:09:37.586511 1111955 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.586661 1111955 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.587197 1111955 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
I0528 21:09:37.621286 1111955 ssh_runner.go:195] Run: systemctl --version
I0528 21:09:37.621349 1111955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
I0528 21:09:37.659252 1111955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
I0528 21:09:37.750746 1111955 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
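
Note: the JSON listing above is an array of image records; a small struct is enough to consume it. The field names match the output shown, and the sample below is one record copied from it:

// imagelist_decode.go: decode and print the image list format shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

const sample = `[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"}]`

func main() {
	var imgs []image
	if err := json.Unmarshal([]byte(sample), &imgs); err != nil {
		log.Fatal(err)
	}
	for _, im := range imgs {
		fmt.Printf("%-40s %s\n", im.RepoTags[0], im.Size)
	}
}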

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-409073 image ls --format yaml --alsologtostderr:
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-409073
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "112000000"
- id: 234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "107000000"
- id: 9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49700000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 851b8fa21963411d839c37cb92a579c4d775b20b1d266a6466d15e527df69b49
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-409073
size: "30"
- id: 163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "60500000"
- id: 05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "87900000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-409073 image ls --format yaml --alsologtostderr:
I0528 21:09:37.294803 1111892 out.go:291] Setting OutFile to fd 1 ...
I0528 21:09:37.295792 1111892 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.295839 1111892 out.go:304] Setting ErrFile to fd 2...
I0528 21:09:37.295860 1111892 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.296133 1111892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
I0528 21:09:37.296836 1111892 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.298497 1111892 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.300588 1111892 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
I0528 21:09:37.320066 1111892 ssh_runner.go:195] Run: systemctl --version
I0528 21:09:37.320126 1111892 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
I0528 21:09:37.340421 1111892 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
I0528 21:09:37.443263 1111892 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-409073 ssh pgrep buildkitd: exit status 1 (342.726557ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image build -t localhost/my-image:functional-409073 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 image build -t localhost/my-image:functional-409073 testdata/build --alsologtostderr: (2.124472821s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-409073 image build -t localhost/my-image:functional-409073 testdata/build --alsologtostderr:
I0528 21:09:37.907110 1112034 out.go:291] Setting OutFile to fd 1 ...
I0528 21:09:37.907990 1112034 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.908031 1112034 out.go:304] Setting ErrFile to fd 2...
I0528 21:09:37.908052 1112034 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0528 21:09:37.908408 1112034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
I0528 21:09:37.909280 1112034 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.909972 1112034 config.go:182] Loaded profile config "functional-409073": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0528 21:09:37.910555 1112034 cli_runner.go:164] Run: docker container inspect functional-409073 --format={{.State.Status}}
I0528 21:09:37.931337 1112034 ssh_runner.go:195] Run: systemctl --version
I0528 21:09:37.931389 1112034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409073
I0528 21:09:37.955236 1112034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/functional-409073/id_rsa Username:docker}
I0528 21:09:38.043134 1112034 build_images.go:161] Building image from path: /tmp/build.1719913306.tar
I0528 21:09:38.043229 1112034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0528 21:09:38.054009 1112034 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1719913306.tar
I0528 21:09:38.058685 1112034 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1719913306.tar: stat -c "%s %y" /var/lib/minikube/build/build.1719913306.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1719913306.tar': No such file or directory
I0528 21:09:38.058727 1112034 ssh_runner.go:362] scp /tmp/build.1719913306.tar --> /var/lib/minikube/build/build.1719913306.tar (3072 bytes)
I0528 21:09:38.087378 1112034 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1719913306
I0528 21:09:38.096913 1112034 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1719913306 -xf /var/lib/minikube/build/build.1719913306.tar
I0528 21:09:38.105881 1112034 docker.go:360] Building image: /var/lib/minikube/build/build.1719913306
I0528 21:09:38.105993 1112034 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-409073 /var/lib/minikube/build/build.1719913306
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:87076ae1fc1aec393a7cd160f9670f028ef296949fa09fe676eff91005b7a405 done
#8 naming to localhost/my-image:functional-409073 done
#8 DONE 0.1s
I0528 21:09:39.900036 1112034 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-409073 /var/lib/minikube/build/build.1719913306: (1.794013703s)
I0528 21:09:39.900111 1112034 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1719913306
I0528 21:09:39.909595 1112034 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1719913306.tar
I0528 21:09:39.919048 1112034 build_images.go:217] Built localhost/my-image:functional-409073 from /tmp/build.1719913306.tar
I0528 21:09:39.919079 1112034 build_images.go:133] succeeded building to: functional-409073
I0528 21:09:39.919084 1112034 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.69s)
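
Note: the first step the log shows is the local build context (testdata/build) being packed into a tar before it is copied into the node and fed to `docker build`. A generic illustration of just that packing step with archive/tar; this is not minikube's build_images.go, and the output path is an assumption:

// buildctx_tar.go: pack a build-context directory into a tar file.
package main

import (
	"archive/tar"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	out, err := os.Create("/tmp/build-context.tar") // assumption: scratch path
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	root := "testdata/build"
	err = filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name, _ = filepath.Rel(root, path) // store paths relative to the context
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
}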

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2024/05/28 21:09:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.447124034s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-409073
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.47s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.21s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-409073 docker-env) && out/minikube-linux-arm64 status -p functional-409073"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-409073 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image load --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 image load --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr: (4.101500273s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image load --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 image load --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr: (2.618326724s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.465157018s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-409073
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image load --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 image load --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr: (3.267086283s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.97s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image save gcr.io/google-containers/addon-resizer:functional-409073 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image rm gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-409073 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.042083101s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.27s)
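A minimal sketch of the tarball round trip the save/load tests cover (the ./addon-resizer-save.tar path is illustrative):

	# Export an image from the cluster to a tar archive, then import it back.
	minikube -p functional-409073 image save gcr.io/google-containers/addon-resizer:functional-409073 ./addon-resizer-save.tar
	minikube -p functional-409073 image load ./addon-resizer-save.tar
	minikube -p functional-409073 image ls   # the image should be listed again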

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-409073
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-409073 image save --daemon gcr.io/google-containers/addon-resizer:functional-409073 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-409073
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-409073
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-409073
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-409073
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (136.22s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-055731 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0528 21:10:43.858915 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:11:11.542405 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-055731 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m15.351732966s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (136.22s)
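A minimal sketch of the HA start this test performs; per the status output later in this run, the control planes share one API endpoint (https://192.168.49.254:8443 here):

	# --ha provisions multiple control-plane nodes in a single profile.
	minikube start -p ha-055731 --ha --wait=true --memory=2200 --driver=docker --container-runtime=docker
	minikube -p ha-055731 status   # should report three "Control Plane" nodes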

TestMultiControlPlane/serial/DeployApp (42.58s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-055731 -- rollout status deployment/busybox: (3.463530671s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.2 10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-djt87 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-hbvzj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-kjgsv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-djt87 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-hbvzj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-kjgsv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-djt87 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-hbvzj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-kjgsv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (42.58s)
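A minimal sketch of the poll loop above; the test lists every default-namespace pod, so the app=busybox selector here is an illustrative assumption:

	# Retry until all 3 busybox replicas report a pod IP, as the test does.
	until [ "$(kubectl --context ha-055731 get pods -l app=busybox \
	    -o jsonpath='{.items[*].status.podIP}' | wc -w)" -eq 3 ]; do
	  sleep 2
	done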

TestMultiControlPlane/serial/PingHostFromPods (1.78s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-djt87 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-djt87 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-hbvzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-hbvzj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-kjgsv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-055731 -- exec busybox-fc5497c4f-kjgsv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.78s)
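A minimal sketch of the host-reachability check above (pod names are specific to this run, and NR==5 is tied to the line layout of busybox's nslookup output):

	# Resolve host.minikube.internal from inside a pod, then ping the host gateway.
	kubectl --context ha-055731 exec busybox-fc5497c4f-djt87 -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	kubectl --context ha-055731 exec busybox-fc5497c4f-djt87 -- sh -c "ping -c 1 192.168.49.1"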

TestMultiControlPlane/serial/AddWorkerNode (29.27s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-055731 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-055731 -v=7 --alsologtostderr: (28.275383128s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (29.27s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-055731 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.72s)

TestMultiControlPlane/serial/CopyFile (18.9s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 status --output json -v=7 --alsologtostderr: (1.022351528s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp testdata/cp-test.txt ha-055731:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2339867535/001/cp-test_ha-055731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731:/home/docker/cp-test.txt ha-055731-m02:/home/docker/cp-test_ha-055731_ha-055731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test_ha-055731_ha-055731-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731:/home/docker/cp-test.txt ha-055731-m03:/home/docker/cp-test_ha-055731_ha-055731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test_ha-055731_ha-055731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731:/home/docker/cp-test.txt ha-055731-m04:/home/docker/cp-test_ha-055731_ha-055731-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test_ha-055731_ha-055731-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp testdata/cp-test.txt ha-055731-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2339867535/001/cp-test_ha-055731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m02:/home/docker/cp-test.txt ha-055731:/home/docker/cp-test_ha-055731-m02_ha-055731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test_ha-055731-m02_ha-055731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m02:/home/docker/cp-test.txt ha-055731-m03:/home/docker/cp-test_ha-055731-m02_ha-055731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test_ha-055731-m02_ha-055731-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m02:/home/docker/cp-test.txt ha-055731-m04:/home/docker/cp-test_ha-055731-m02_ha-055731-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test_ha-055731-m02_ha-055731-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp testdata/cp-test.txt ha-055731-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2339867535/001/cp-test_ha-055731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m03:/home/docker/cp-test.txt ha-055731:/home/docker/cp-test_ha-055731-m03_ha-055731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test_ha-055731-m03_ha-055731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m03:/home/docker/cp-test.txt ha-055731-m02:/home/docker/cp-test_ha-055731-m03_ha-055731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test_ha-055731-m03_ha-055731-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m03:/home/docker/cp-test.txt ha-055731-m04:/home/docker/cp-test_ha-055731-m03_ha-055731-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test_ha-055731-m03_ha-055731-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp testdata/cp-test.txt ha-055731-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2339867535/001/cp-test_ha-055731-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m04:/home/docker/cp-test.txt ha-055731:/home/docker/cp-test_ha-055731-m04_ha-055731.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731 "sudo cat /home/docker/cp-test_ha-055731-m04_ha-055731.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m04:/home/docker/cp-test.txt ha-055731-m02:/home/docker/cp-test_ha-055731-m04_ha-055731-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test_ha-055731-m04_ha-055731-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 cp ha-055731-m04:/home/docker/cp-test.txt ha-055731-m03:/home/docker/cp-test_ha-055731-m04_ha-055731-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 ssh -n ha-055731-m03 "sudo cat /home/docker/cp-test_ha-055731-m04_ha-055731-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.90s)
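A minimal sketch of the copy matrix exercised above: minikube cp accepts plain host paths and <node>:<path> forms, covering host-to-node and node-to-node transfers:

	# Host -> secondary node, then read the file back over ssh to verify.
	minikube -p ha-055731 cp testdata/cp-test.txt ha-055731-m02:/home/docker/cp-test.txt
	minikube -p ha-055731 ssh -n ha-055731-m02 "sudo cat /home/docker/cp-test.txt"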

TestMultiControlPlane/serial/StopSecondaryNode (11.77s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 node stop m02 -v=7 --alsologtostderr
E0528 21:13:36.073369 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.078716 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.088992 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.109260 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.149530 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.229823 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.390176 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:36.710853 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:37.351773 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:38.631941 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:41.193551 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 node stop m02 -v=7 --alsologtostderr: (11.003718044s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr: exit status 7 (765.077467ms)

-- stdout --
	ha-055731
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055731-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055731-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-055731-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0528 21:13:43.507366 1132988 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:13:43.507573 1132988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:13:43.507588 1132988 out.go:304] Setting ErrFile to fd 2...
	I0528 21:13:43.507593 1132988 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:13:43.507886 1132988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:13:43.508128 1132988 out.go:298] Setting JSON to false
	I0528 21:13:43.508206 1132988 notify.go:220] Checking for updates...
	I0528 21:13:43.509073 1132988 mustload.go:65] Loading cluster: ha-055731
	I0528 21:13:43.509632 1132988 config.go:182] Loaded profile config "ha-055731": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:13:43.509699 1132988 status.go:255] checking status of ha-055731 ...
	I0528 21:13:43.510942 1132988 cli_runner.go:164] Run: docker container inspect ha-055731 --format={{.State.Status}}
	I0528 21:13:43.532079 1132988 status.go:330] ha-055731 host status = "Running" (err=<nil>)
	I0528 21:13:43.532107 1132988 host.go:66] Checking if "ha-055731" exists ...
	I0528 21:13:43.532579 1132988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055731
	I0528 21:13:43.550605 1132988 host.go:66] Checking if "ha-055731" exists ...
	I0528 21:13:43.551030 1132988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:13:43.551116 1132988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055731
	I0528 21:13:43.585219 1132988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33945 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/ha-055731/id_rsa Username:docker}
	I0528 21:13:43.679424 1132988 ssh_runner.go:195] Run: systemctl --version
	I0528 21:13:43.683820 1132988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:13:43.696219 1132988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:13:43.773072 1132988 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-05-28 21:13:43.763574175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:13:43.773761 1132988 kubeconfig.go:125] found "ha-055731" server: "https://192.168.49.254:8443"
	I0528 21:13:43.773808 1132988 api_server.go:166] Checking apiserver status ...
	I0528 21:13:43.773852 1132988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:13:43.785385 1132988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2153/cgroup
	I0528 21:13:43.795137 1132988 api_server.go:182] apiserver freezer: "5:freezer:/docker/a9401fccc00ddaca80824ed564b994f3541fb890bca482642cedce3d39d9a3f2/kubepods/burstable/pod798ba885544c904fc1a664797f8eaafe/7c5412ddf8cb70da6bb5b2de598ee0cbd1c2b8392e1987d270fdced3d1a4dbc1"
	I0528 21:13:43.795212 1132988 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a9401fccc00ddaca80824ed564b994f3541fb890bca482642cedce3d39d9a3f2/kubepods/burstable/pod798ba885544c904fc1a664797f8eaafe/7c5412ddf8cb70da6bb5b2de598ee0cbd1c2b8392e1987d270fdced3d1a4dbc1/freezer.state
	I0528 21:13:43.803568 1132988 api_server.go:204] freezer state: "THAWED"
	I0528 21:13:43.803599 1132988 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0528 21:13:43.811456 1132988 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0528 21:13:43.811485 1132988 status.go:422] ha-055731 apiserver status = Running (err=<nil>)
	I0528 21:13:43.811496 1132988 status.go:257] ha-055731 status: &{Name:ha-055731 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:13:43.811514 1132988 status.go:255] checking status of ha-055731-m02 ...
	I0528 21:13:43.811818 1132988 cli_runner.go:164] Run: docker container inspect ha-055731-m02 --format={{.State.Status}}
	I0528 21:13:43.827916 1132988 status.go:330] ha-055731-m02 host status = "Stopped" (err=<nil>)
	I0528 21:13:43.827953 1132988 status.go:343] host is not running, skipping remaining checks
	I0528 21:13:43.827961 1132988 status.go:257] ha-055731-m02 status: &{Name:ha-055731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:13:43.827981 1132988 status.go:255] checking status of ha-055731-m03 ...
	I0528 21:13:43.828384 1132988 cli_runner.go:164] Run: docker container inspect ha-055731-m03 --format={{.State.Status}}
	I0528 21:13:43.844789 1132988 status.go:330] ha-055731-m03 host status = "Running" (err=<nil>)
	I0528 21:13:43.844815 1132988 host.go:66] Checking if "ha-055731-m03" exists ...
	I0528 21:13:43.845245 1132988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055731-m03
	I0528 21:13:43.866531 1132988 host.go:66] Checking if "ha-055731-m03" exists ...
	I0528 21:13:43.866850 1132988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:13:43.866891 1132988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055731-m03
	I0528 21:13:43.895842 1132988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33955 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/ha-055731-m03/id_rsa Username:docker}
	I0528 21:13:43.989656 1132988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:13:44.003237 1132988 kubeconfig.go:125] found "ha-055731" server: "https://192.168.49.254:8443"
	I0528 21:13:44.003271 1132988 api_server.go:166] Checking apiserver status ...
	I0528 21:13:44.003341 1132988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:13:44.016551 1132988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2046/cgroup
	I0528 21:13:44.030004 1132988 api_server.go:182] apiserver freezer: "5:freezer:/docker/3ac27c68e68252ae17241bf513045de353fb8c717da465838196d5f37150b344/kubepods/burstable/podc42c90322ea349b6118ac77d5b436655/a12afc2c52adab7884514d001ca99e2eb909e475c56b4e5d385dd320b56716eb"
	I0528 21:13:44.030170 1132988 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3ac27c68e68252ae17241bf513045de353fb8c717da465838196d5f37150b344/kubepods/burstable/podc42c90322ea349b6118ac77d5b436655/a12afc2c52adab7884514d001ca99e2eb909e475c56b4e5d385dd320b56716eb/freezer.state
	I0528 21:13:44.047394 1132988 api_server.go:204] freezer state: "THAWED"
	I0528 21:13:44.047464 1132988 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0528 21:13:44.056940 1132988 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0528 21:13:44.057008 1132988 status.go:422] ha-055731-m03 apiserver status = Running (err=<nil>)
	I0528 21:13:44.057033 1132988 status.go:257] ha-055731-m03 status: &{Name:ha-055731-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:13:44.057066 1132988 status.go:255] checking status of ha-055731-m04 ...
	I0528 21:13:44.057423 1132988 cli_runner.go:164] Run: docker container inspect ha-055731-m04 --format={{.State.Status}}
	I0528 21:13:44.077313 1132988 status.go:330] ha-055731-m04 host status = "Running" (err=<nil>)
	I0528 21:13:44.077337 1132988 host.go:66] Checking if "ha-055731-m04" exists ...
	I0528 21:13:44.077634 1132988 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-055731-m04
	I0528 21:13:44.099088 1132988 host.go:66] Checking if "ha-055731-m04" exists ...
	I0528 21:13:44.099402 1132988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:13:44.099448 1132988 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-055731-m04
	I0528 21:13:44.120489 1132988 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33960 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/ha-055731-m04/id_rsa Username:docker}
	I0528 21:13:44.207520 1132988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:13:44.220245 1132988 status.go:257] ha-055731-m04 status: &{Name:ha-055731-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.77s)
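A minimal sketch of how a script can rely on the behavior shown above: minikube status exits non-zero (7 in this run) when any node is down, so no output parsing is needed:

	minikube -p ha-055731 node stop m02
	if ! minikube -p ha-055731 status; then
	  echo "cluster is degraded: at least one node is not running"
	fi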

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.55s)

TestMultiControlPlane/serial/RestartSecondaryNode (55.4s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 node start m02 -v=7 --alsologtostderr
E0528 21:13:46.314229 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:13:56.554516 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:14:17.035231 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 node start m02 -v=7 --alsologtostderr: (54.246634635s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr: (1.055290259s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (55.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.19s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-055731 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-055731 -v=7 --alsologtostderr
E0528 21:14:57.995440 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-055731 -v=7 --alsologtostderr: (34.542547683s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-055731 --wait=true -v=7 --alsologtostderr
E0528 21:15:43.858368 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 21:16:19.915888 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-055731 --wait=true -v=7 --alsologtostderr: (3m12.495820666s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-055731
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (227.19s)
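A minimal sketch of the stop/restart cycle this test covers; comparing node list output before and after is how it confirms the node set survives:

	minikube -p ha-055731 node list          # record the node set
	minikube stop -p ha-055731
	minikube start -p ha-055731 --wait=true
	minikube -p ha-055731 node list          # the same nodes should reappear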

TestMultiControlPlane/serial/DeleteSecondaryNode (11.86s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 node delete m03 -v=7 --alsologtostderr
E0528 21:18:36.072980 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 node delete m03 -v=7 --alsologtostderr: (10.959165646s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.86s)
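A minimal sketch of the readiness check used above: a go-template that prints each node's Ready condition, which should read True on every line once the deletion settles:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'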

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

TestMultiControlPlane/serial/StopCluster (32.68s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 stop -v=7 --alsologtostderr
E0528 21:19:03.756485 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 stop -v=7 --alsologtostderr: (32.581553099s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr: exit status 7 (97.147116ms)

-- stdout --
	ha-055731
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055731-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-055731-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0528 21:19:13.187864 1158730 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:19:13.188062 1158730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:19:13.188093 1158730 out.go:304] Setting ErrFile to fd 2...
	I0528 21:19:13.188113 1158730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:19:13.188404 1158730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:19:13.188653 1158730 out.go:298] Setting JSON to false
	I0528 21:19:13.188714 1158730 mustload.go:65] Loading cluster: ha-055731
	I0528 21:19:13.188791 1158730 notify.go:220] Checking for updates...
	I0528 21:19:13.189201 1158730 config.go:182] Loaded profile config "ha-055731": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:19:13.189242 1158730 status.go:255] checking status of ha-055731 ...
	I0528 21:19:13.189771 1158730 cli_runner.go:164] Run: docker container inspect ha-055731 --format={{.State.Status}}
	I0528 21:19:13.207283 1158730 status.go:330] ha-055731 host status = "Stopped" (err=<nil>)
	I0528 21:19:13.207303 1158730 status.go:343] host is not running, skipping remaining checks
	I0528 21:19:13.207311 1158730 status.go:257] ha-055731 status: &{Name:ha-055731 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:19:13.207333 1158730 status.go:255] checking status of ha-055731-m02 ...
	I0528 21:19:13.207620 1158730 cli_runner.go:164] Run: docker container inspect ha-055731-m02 --format={{.State.Status}}
	I0528 21:19:13.224025 1158730 status.go:330] ha-055731-m02 host status = "Stopped" (err=<nil>)
	I0528 21:19:13.224053 1158730 status.go:343] host is not running, skipping remaining checks
	I0528 21:19:13.224072 1158730 status.go:257] ha-055731-m02 status: &{Name:ha-055731-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:19:13.224098 1158730 status.go:255] checking status of ha-055731-m04 ...
	I0528 21:19:13.224389 1158730 cli_runner.go:164] Run: docker container inspect ha-055731-m04 --format={{.State.Status}}
	I0528 21:19:13.239692 1158730 status.go:330] ha-055731-m04 host status = "Stopped" (err=<nil>)
	I0528 21:19:13.239711 1158730 status.go:343] host is not running, skipping remaining checks
	I0528 21:19:13.239718 1158730 status.go:257] ha-055731-m04 status: &{Name:ha-055731-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.68s)

TestMultiControlPlane/serial/RestartCluster (82.11s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-055731 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-055731 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m21.133506833s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (82.11s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.54s)

TestMultiControlPlane/serial/AddSecondaryNode (44.35s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-055731 --control-plane -v=7 --alsologtostderr
E0528 21:20:43.858331 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-055731 --control-plane -v=7 --alsologtostderr: (43.305715343s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-055731 status -v=7 --alsologtostderr: (1.04890218s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.35s)
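A minimal sketch of growing the control plane after the fact, as this test does:

	# Without --control-plane the new node would join as a worker.
	minikube node add -p ha-055731 --control-plane
	minikube -p ha-055731 status   # the new node shows as "type: Control Plane"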

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.78s)

TestImageBuild/serial/Setup (31.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-816949 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-816949 --driver=docker  --container-runtime=docker: (31.939732139s)
--- PASS: TestImageBuild/serial/Setup (31.94s)

TestImageBuild/serial/NormalBuild (1.89s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-816949
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-816949: (1.889345611s)
--- PASS: TestImageBuild/serial/NormalBuild (1.89s)

TestImageBuild/serial/BuildWithBuildArg (0.87s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-816949
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.87s)

TestImageBuild/serial/BuildWithDockerIgnore (0.7s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-816949
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.70s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-816949
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)
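A minimal sketch collecting the image build variants tested above: a plain build, build args with caching disabled, and an alternate Dockerfile selected with -f:

	minikube -p image-816949 image build -t aaa:latest ./testdata/image-build/test-normal
	minikube -p image-816949 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
	minikube -p image-816949 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f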

TestJSONOutput/start/Command (46.44s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-978993 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-978993 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (46.432749398s)
--- PASS: TestJSONOutput/start/Command (46.44s)
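A minimal sketch of consuming this machine-readable output, assuming jq is installed and that progress events carry the type io.k8s.sigs.minikube.step with a data.message field (verify against your minikube version's JSON schema):

	# Stream human-readable progress messages out of the JSON event log.
	minikube start -p json-output-978993 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'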

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-978993 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-978993 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-978993 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-978993 --output=json --user=testUser: (10.880934416s)
--- PASS: TestJSONOutput/stop/Command (10.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-134821 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-134821 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.984441ms)

-- stdout --
	{"specversion":"1.0","id":"f14dce6a-0582-432d-8416-bbe555c6334b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-134821] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"142806bc-9816-40bb-84fc-79127f8b0004","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18966"}}
	{"specversion":"1.0","id":"80c890a0-a053-4f3c-b4cf-271734d0e168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cfd56dbd-7552-455c-a5f8-9620f3f2ccc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig"}}
	{"specversion":"1.0","id":"439ad0d5-1819-408c-80fd-086e8771cbc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube"}}
	{"specversion":"1.0","id":"dc16c588-8eab-45b9-bb60-a11870d2e400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"733a4f43-9ce1-4ef1-853c-d2066bc98199","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c6653412-5e7a-4c6f-ab0a-411faf379b60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-134821" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-134821
--- PASS: TestErrorJSONOutput (0.22s)
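
Aside: each line minikube emits under --output=json is a CloudEvents envelope (specversion/id/source/type/data), as seen in the stdout above. Below is a minimal Go sketch of decoding one such line, assuming only the fields visible in this log; the struct and program are illustrative, not part of the test suite.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// cloudEvent mirrors the JSON keys visible in the log above.
	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// Sample taken from the error event in the stdout block above.
		line := `{"specversion":"1.0","id":"c6653412-5e7a-4c6f-ab0a-411faf379b60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev cloudEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		// An io.k8s.sigs.minikube.error event carries the exit code and error
		// name that the test asserts on (here: 56 / DRV_UNSUPPORTED_OS).
		fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
	}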

TestKicCustomNetwork/create_custom_network (33.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-794907 --network=
E0528 21:23:36.072943 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-794907 --network=: (31.186185473s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-794907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-794907
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-794907: (2.112327696s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.32s)

TestKicCustomNetwork/use_default_bridge_network (32.98s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-534085 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-534085 --network=bridge: (31.007357265s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-534085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-534085
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-534085: (1.955634727s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.98s)

TestKicExistingNetwork (37.8s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-098810 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-098810 --network=existing-network: (35.641333936s)
helpers_test.go:175: Cleaning up "existing-network-098810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-098810
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-098810: (2.016862072s)
--- PASS: TestKicExistingNetwork (37.80s)

TestKicCustomSubnet (31.98s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-651619 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-651619 --subnet=192.168.60.0/24: (29.898279232s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-651619 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-651619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-651619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-651619: (2.056391457s)
--- PASS: TestKicCustomSubnet (31.98s)
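
Aside: the subnet assertion above relies on Docker's Go-template support in network inspect; (index .IPAM.Config 0).Subnet selects the first IPAM entry. A standalone sketch of the same check follows (hypothetical helper; network name and subnet copied from the run above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask Docker for the first IPAM subnet of the network the test created.
		out, err := exec.Command("docker", "network", "inspect",
			"custom-subnet-651619", "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		// Expect the subnet that was passed via --subnet=192.168.60.0/24.
		fmt.Println(got == "192.168.60.0/24")
	}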

TestKicStaticIP (34.53s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-317752 --static-ip=192.168.200.200
E0528 21:25:43.858266 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-317752 --static-ip=192.168.200.200: (32.367270706s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-317752 ip
helpers_test.go:175: Cleaning up "static-ip-317752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-317752
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-317752: (2.02157805s)
--- PASS: TestKicStaticIP (34.53s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (69.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-636011 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-636011 --driver=docker  --container-runtime=docker: (30.157956733s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-638866 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-638866 --driver=docker  --container-runtime=docker: (34.351530078s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-636011
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-638866
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-638866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-638866
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-638866: (2.107272698s)
helpers_test.go:175: Cleaning up "first-636011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-636011
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-636011: (2.085981073s)
--- PASS: TestMinikubeProfile (69.90s)

TestMountStart/serial/StartWithMountFirst (7.49s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-053335 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-053335 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.493583022s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.49s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-053335 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (7.59s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-068867 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-068867 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.585134769s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.59s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-068867 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-053335 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-053335 --alsologtostderr -v=5: (1.468825408s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-068867 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-068867
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-068867: (1.211375663s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.65s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-068867
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-068867: (7.648166938s)
--- PASS: TestMountStart/serial/RestartStopped (8.65s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-068867 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (67.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-508771 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0528 21:28:36.072990 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-508771 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m6.424053349s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.02s)

TestMultiNode/serial/DeployApp2Nodes (37.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-508771 -- rollout status deployment/busybox: (4.165130457s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-7hfv8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-sl47g -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-7hfv8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-sl47g -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-7hfv8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-sl47g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (37.93s)
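
Aside: the repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a poll, not a failure: the JSONPath query is re-run until both busybox replicas report an IP. A sketch of that retry loop under the same assumptions (plain kubectl, fixed backoff; the suite uses its own retry helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		for i := 0; i < 10; i++ {
			out, err := exec.Command("kubectl", "get", "pods", "-o",
				"jsonpath={.items[*].status.podIP}").Output()
			if err == nil {
				if ips := strings.Fields(string(out)); len(ips) == 2 {
					fmt.Println("both pod IPs ready:", ips)
					return
				}
			}
			time.Sleep(2 * time.Second) // assumed backoff interval
		}
		fmt.Println("gave up waiting for 2 pod IPs")
	}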

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-7hfv8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-7hfv8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-sl47g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-508771 -- exec busybox-fc5497c4f-sl47g -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (17.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-508771 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-508771 -v 3 --alsologtostderr: (16.856220403s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.64s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-508771 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (9.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp testdata/cp-test.txt multinode-508771:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3998188018/001/cp-test_multinode-508771.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771:/home/docker/cp-test.txt multinode-508771-m02:/home/docker/cp-test_multinode-508771_multinode-508771-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m02 "sudo cat /home/docker/cp-test_multinode-508771_multinode-508771-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771:/home/docker/cp-test.txt multinode-508771-m03:/home/docker/cp-test_multinode-508771_multinode-508771-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m03 "sudo cat /home/docker/cp-test_multinode-508771_multinode-508771-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp testdata/cp-test.txt multinode-508771-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3998188018/001/cp-test_multinode-508771-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771-m02:/home/docker/cp-test.txt multinode-508771:/home/docker/cp-test_multinode-508771-m02_multinode-508771.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771 "sudo cat /home/docker/cp-test_multinode-508771-m02_multinode-508771.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771-m02:/home/docker/cp-test.txt multinode-508771-m03:/home/docker/cp-test_multinode-508771-m02_multinode-508771-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m03 "sudo cat /home/docker/cp-test_multinode-508771-m02_multinode-508771-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp testdata/cp-test.txt multinode-508771-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3998188018/001/cp-test_multinode-508771-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771-m03:/home/docker/cp-test.txt multinode-508771:/home/docker/cp-test_multinode-508771-m03_multinode-508771.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771 "sudo cat /home/docker/cp-test_multinode-508771-m03_multinode-508771.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 cp multinode-508771-m03:/home/docker/cp-test.txt multinode-508771-m02:/home/docker/cp-test_multinode-508771-m03_multinode-508771-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 ssh -n multinode-508771-m02 "sudo cat /home/docker/cp-test_multinode-508771-m03_multinode-508771-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.85s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-508771 node stop m03: (1.211697681s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-508771 status: exit status 7 (511.723334ms)

-- stdout --
	multinode-508771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-508771-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-508771-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr: exit status 7 (482.499669ms)

-- stdout --
	multinode-508771
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-508771-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-508771-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0528 21:29:53.296812 1227052 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:29:53.296985 1227052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:29:53.297015 1227052 out.go:304] Setting ErrFile to fd 2...
	I0528 21:29:53.297035 1227052 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:29:53.297277 1227052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:29:53.297470 1227052 out.go:298] Setting JSON to false
	I0528 21:29:53.297533 1227052 mustload.go:65] Loading cluster: multinode-508771
	I0528 21:29:53.297607 1227052 notify.go:220] Checking for updates...
	I0528 21:29:53.298071 1227052 config.go:182] Loaded profile config "multinode-508771": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:29:53.298103 1227052 status.go:255] checking status of multinode-508771 ...
	I0528 21:29:53.298620 1227052 cli_runner.go:164] Run: docker container inspect multinode-508771 --format={{.State.Status}}
	I0528 21:29:53.318067 1227052 status.go:330] multinode-508771 host status = "Running" (err=<nil>)
	I0528 21:29:53.318100 1227052 host.go:66] Checking if "multinode-508771" exists ...
	I0528 21:29:53.318386 1227052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-508771
	I0528 21:29:53.335374 1227052 host.go:66] Checking if "multinode-508771" exists ...
	I0528 21:29:53.335715 1227052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:29:53.335793 1227052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-508771
	I0528 21:29:53.356587 1227052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34072 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/multinode-508771/id_rsa Username:docker}
	I0528 21:29:53.443106 1227052 ssh_runner.go:195] Run: systemctl --version
	I0528 21:29:53.447768 1227052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:29:53.464634 1227052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0528 21:29:53.521787 1227052 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-05-28 21:29:53.512193053 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214974464 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89 Expected:8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0528 21:29:53.522453 1227052 kubeconfig.go:125] found "multinode-508771" server: "https://192.168.67.2:8443"
	I0528 21:29:53.522501 1227052 api_server.go:166] Checking apiserver status ...
	I0528 21:29:53.522548 1227052 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0528 21:29:53.533820 1227052 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2014/cgroup
	I0528 21:29:53.543911 1227052 api_server.go:182] apiserver freezer: "5:freezer:/docker/f97854fef8814d434a0ae311e6796cc5363eea6be658d346b34a68b7f6474c59/kubepods/burstable/podcf71a78b9a9801922b9cb3de90a957bc/3bd47855d83b49fc7bfcf16478ef2e11e934e9a929f5ac7a45977399229f6aa4"
	I0528 21:29:53.543993 1227052 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f97854fef8814d434a0ae311e6796cc5363eea6be658d346b34a68b7f6474c59/kubepods/burstable/podcf71a78b9a9801922b9cb3de90a957bc/3bd47855d83b49fc7bfcf16478ef2e11e934e9a929f5ac7a45977399229f6aa4/freezer.state
	I0528 21:29:53.553459 1227052 api_server.go:204] freezer state: "THAWED"
	I0528 21:29:53.553497 1227052 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0528 21:29:53.561184 1227052 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0528 21:29:53.561213 1227052 status.go:422] multinode-508771 apiserver status = Running (err=<nil>)
	I0528 21:29:53.561224 1227052 status.go:257] multinode-508771 status: &{Name:multinode-508771 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:29:53.561242 1227052 status.go:255] checking status of multinode-508771-m02 ...
	I0528 21:29:53.561541 1227052 cli_runner.go:164] Run: docker container inspect multinode-508771-m02 --format={{.State.Status}}
	I0528 21:29:53.577320 1227052 status.go:330] multinode-508771-m02 host status = "Running" (err=<nil>)
	I0528 21:29:53.577344 1227052 host.go:66] Checking if "multinode-508771-m02" exists ...
	I0528 21:29:53.577632 1227052 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-508771-m02
	I0528 21:29:53.594585 1227052 host.go:66] Checking if "multinode-508771-m02" exists ...
	I0528 21:29:53.594900 1227052 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0528 21:29:53.594948 1227052 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-508771-m02
	I0528 21:29:53.611648 1227052 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34077 SSHKeyPath:/home/jenkins/minikube-integration/18966-1064873/.minikube/machines/multinode-508771-m02/id_rsa Username:docker}
	I0528 21:29:53.699134 1227052 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0528 21:29:53.710975 1227052 status.go:257] multinode-508771-m02 status: &{Name:multinode-508771-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:29:53.711013 1227052 status.go:255] checking status of multinode-508771-m03 ...
	I0528 21:29:53.711302 1227052 cli_runner.go:164] Run: docker container inspect multinode-508771-m03 --format={{.State.Status}}
	I0528 21:29:53.727340 1227052 status.go:330] multinode-508771-m03 host status = "Stopped" (err=<nil>)
	I0528 21:29:53.727365 1227052 status.go:343] host is not running, skipping remaining checks
	I0528 21:29:53.727373 1227052 status.go:257] multinode-508771-m03 status: &{Name:multinode-508771-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
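
Aside: the stderr trace above shows the health check behind `status`: find the kube-apiserver process, confirm its freezer cgroup reads THAWED, then GET /healthz and expect 200 with body "ok". A sketch of just the HTTP probe (endpoint copied from the log; the insecure TLS config is an assumption for a self-signed test cluster):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Assumed: skip verification for the cluster's self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
	}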

TestMultiNode/serial/StartAfterStop (10.98s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 node start m03 -v=7 --alsologtostderr
E0528 21:29:59.117689 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-508771 node start m03 -v=7 --alsologtostderr: (10.241687638s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.98s)

TestMultiNode/serial/RestartKeepsNodes (65.83s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-508771
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-508771
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-508771: (22.573980826s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-508771 --wait=true -v=8 --alsologtostderr
E0528 21:30:43.858292 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-508771 --wait=true -v=8 --alsologtostderr: (43.138785586s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-508771
--- PASS: TestMultiNode/serial/RestartKeepsNodes (65.83s)

TestMultiNode/serial/DeleteNode (5.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-508771 node delete m03: (4.814159608s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)
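
Aside: the go-template query above prints one True/False per node's Ready condition. A sketch reducing that output to a single all-nodes-ready check (illustrative only; the template is copied from the test command):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			panic(err)
		}
		allReady := true
		for _, s := range strings.Fields(string(out)) {
			if s != "True" {
				allReady = false
			}
		}
		fmt.Println("all nodes Ready:", allReady)
	}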

TestMultiNode/serial/StopMultiNode (21.71s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-508771 stop: (21.538683682s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-508771 status: exit status 7 (88.216911ms)

-- stdout --
	multinode-508771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-508771-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr: exit status 7 (84.946344ms)

-- stdout --
	multinode-508771
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-508771-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0528 21:31:37.701037 1238881 out.go:291] Setting OutFile to fd 1 ...
	I0528 21:31:37.701280 1238881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:31:37.701322 1238881 out.go:304] Setting ErrFile to fd 2...
	I0528 21:31:37.701342 1238881 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0528 21:31:37.701778 1238881 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18966-1064873/.minikube/bin
	I0528 21:31:37.702134 1238881 out.go:298] Setting JSON to false
	I0528 21:31:37.702191 1238881 mustload.go:65] Loading cluster: multinode-508771
	I0528 21:31:37.702985 1238881 config.go:182] Loaded profile config "multinode-508771": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0528 21:31:37.703035 1238881 status.go:255] checking status of multinode-508771 ...
	I0528 21:31:37.703862 1238881 cli_runner.go:164] Run: docker container inspect multinode-508771 --format={{.State.Status}}
	I0528 21:31:37.704459 1238881 notify.go:220] Checking for updates...
	I0528 21:31:37.721867 1238881 status.go:330] multinode-508771 host status = "Stopped" (err=<nil>)
	I0528 21:31:37.721894 1238881 status.go:343] host is not running, skipping remaining checks
	I0528 21:31:37.721903 1238881 status.go:257] multinode-508771 status: &{Name:multinode-508771 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0528 21:31:37.721938 1238881 status.go:255] checking status of multinode-508771-m02 ...
	I0528 21:31:37.722430 1238881 cli_runner.go:164] Run: docker container inspect multinode-508771-m02 --format={{.State.Status}}
	I0528 21:31:37.740549 1238881 status.go:330] multinode-508771-m02 host status = "Stopped" (err=<nil>)
	I0528 21:31:37.740577 1238881 status.go:343] host is not running, skipping remaining checks
	I0528 21:31:37.740585 1238881 status.go:257] multinode-508771-m02 status: &{Name:multinode-508771-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.71s)

TestMultiNode/serial/RestartMultiNode (57.77s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-508771 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-508771 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.077826443s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-508771 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.77s)

TestMultiNode/serial/ValidateNameConflict (38.65s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-508771
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-508771-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-508771-m02 --driver=docker  --container-runtime=docker: exit status 14 (82.224298ms)

-- stdout --
	* [multinode-508771-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-508771-m02' is duplicated with machine name 'multinode-508771-m02' in profile 'multinode-508771'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-508771-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-508771-m03 --driver=docker  --container-runtime=docker: (36.083676995s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-508771
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-508771: exit status 80 (349.72965ms)

-- stdout --
	* Adding node m03 to cluster multinode-508771 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-508771-m03 already exists in multinode-508771-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-508771-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-508771-m03: (2.078957381s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.65s)

TestPreload (108.61s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-718002 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0528 21:33:36.072662 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-718002 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (59.20622781s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-718002 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-718002 image pull gcr.io/k8s-minikube/busybox: (1.31911796s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-718002
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-718002: (10.923047335s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-718002 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-718002 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (34.703784443s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-718002 image list
helpers_test.go:175: Cleaning up "test-preload-718002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-718002
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-718002: (2.240830927s)
--- PASS: TestPreload (108.61s)
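For readers who want to re-run the preload check by hand, a minimal sketch using the commands above; only the final comment about what to look for is an assumption:

	minikube start -p test-preload-718002 --memory=2200 --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.24.4
	minikube -p test-preload-718002 image pull gcr.io/k8s-minikube/busybox   # cache an extra image
	minikube stop -p test-preload-718002
	minikube start -p test-preload-718002 --memory=2200 --driver=docker --container-runtime=docker   # restart on the default, preloaded Kubernetes
	minikube -p test-preload-718002 image list   # busybox should survive the restart
	minikube delete -p test-preload-718002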

TestScheduledStopUnix (105.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-200114 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-200114 --memory=2048 --driver=docker  --container-runtime=docker: (32.552041969s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-200114 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-200114 -n scheduled-stop-200114
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-200114 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-200114 --cancel-scheduled
E0528 21:35:43.858652 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-200114 -n scheduled-stop-200114
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-200114
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-200114 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-200114
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-200114: exit status 7 (68.213526ms)
-- stdout --
	scheduled-stop-200114
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-200114 -n scheduled-stop-200114
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-200114 -n scheduled-stop-200114: exit status 7 (64.696762ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-200114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-200114
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-200114: (1.670337509s)
--- PASS: TestScheduledStopUnix (105.76s)
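The scheduled-stop flow above condenses to the following sketch (commands copied from the log; the timings are illustrative):

	minikube start -p scheduled-stop-200114 --memory=2048 --driver=docker --container-runtime=docker
	minikube stop -p scheduled-stop-200114 --schedule 5m            # arm a stop five minutes out
	minikube status --format={{.TimeToStop}} -p scheduled-stop-200114
	minikube stop -p scheduled-stop-200114 --cancel-scheduled       # disarm it
	minikube stop -p scheduled-stop-200114 --schedule 15s           # re-arm; fires ~15s later
	minikube status --format={{.Host}} -p scheduled-stop-200114     # exit status 7 once Stopped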

TestSkaffold (120.15s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2622347048 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-378369 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-378369 --memory=2600 --driver=docker  --container-runtime=docker: (35.458649182s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2622347048 run --minikube-profile skaffold-378369 --kube-context skaffold-378369 --status-check=true --port-forward=false --interactive=false
E0528 21:38:36.073117 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2622347048 run --minikube-profile skaffold-378369 --kube-context skaffold-378369 --status-check=true --port-forward=false --interactive=false: (1m8.544817264s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-585d4bcb7c-whbhg" [309dc5df-049f-4823-9456-752333fe3f97] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004033354s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-f5c9c8495-4nltw" [4e9172f0-b975-414f-b0e0-aea5ff0be13f] Running
E0528 21:38:46.904524 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003025182s
helpers_test.go:175: Cleaning up "skaffold-378369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-378369
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-378369: (2.975499179s)
--- PASS: TestSkaffold (120.15s)
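A minimal sketch of the same skaffold round-trip, assuming skaffold is on PATH and the working directory contains a skaffold.yaml (the test uses a pinned temp binary instead):

	minikube start -p skaffold-378369 --memory=2600 --driver=docker --container-runtime=docker
	skaffold run --minikube-profile skaffold-378369 --kube-context skaffold-378369 --status-check=true --port-forward=false --interactive=false
	kubectl --context skaffold-378369 get pods -l app=leeroy-app    # then the same for app=leeroy-web
	minikube delete -p skaffold-378369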

TestInsufficientStorage (10.94s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-974917 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-974917 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.681199224s)
-- stdout --
	{"specversion":"1.0","id":"0dfb6028-df43-44fd-9d50-94fe6a9bece4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-974917] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7c47b6f-f543-4319-88ef-92b416949ae9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18966"}}
	{"specversion":"1.0","id":"a8f59147-9b04-415e-b6a0-e1de30d74e63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b5ba8f1-655f-418b-a49a-3ec35184f561","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig"}}
	{"specversion":"1.0","id":"0bd42be1-50a3-4011-9f6b-436b9397c661","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube"}}
	{"specversion":"1.0","id":"6cbaf600-2024-4e5c-997f-cceb38cbd933","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5f9e9b2e-91b3-4dcc-8ed9-3591405fb795","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d57fc010-2e26-41aa-b634-b1c445568c12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d2fb1132-eca0-454d-b1fe-0b803f9cc29d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7d5b05a0-aa68-4db0-8457-e2b7afb148db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"be7a68aa-0fff-4fdf-b840-67043919317f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6567906e-adcf-47cb-91fd-17751ccc8ed7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-974917\" primary control-plane node in \"insufficient-storage-974917\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bcbd4351-fd84-4d52-9534-474829029400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1716228441-18934 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"67720f5a-6770-40d8-b37d-d8fc0d8ca466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"185c186d-1cfc-4f73-b9df-173eb1e8905c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-974917 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-974917 --output=json --layout=cluster: exit status 7 (286.73089ms)
-- stdout --
	{"Name":"insufficient-storage-974917","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-974917","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0528 21:39:01.577416 1270970 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-974917" does not appear in /home/jenkins/minikube-integration/18966-1064873/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-974917 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-974917 --output=json --layout=cluster: exit status 7 (277.598311ms)
-- stdout --
	{"Name":"insufficient-storage-974917","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-974917","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0528 21:39:01.854224 1271023 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-974917" does not appear in /home/jenkins/minikube-integration/18966-1064873/kubeconfig
	E0528 21:39:01.864484 1271023 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/insufficient-storage-974917/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-974917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-974917
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-974917: (1.691174084s)
--- PASS: TestInsufficientStorage (10.94s)
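The fake capacity comes from the test-only environment variables echoed in the JSON events above; a hedged sketch of reproducing the failure, plus the remediation the error itself advises:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p insufficient-storage-974917 --memory=2048 --output=json --driver=docker --container-runtime=docker
	# expected: exit status 26 (RSRC_DOCKER_STORAGE); to actually free space:
	docker system prune                    # optionally with -a
	minikube ssh -- docker system prune    # if using the Docker container runtime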

TestRunningBinaryUpgrade (84.54s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1860032800 start -p running-upgrade-441227 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0528 21:46:39.118670 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1860032800 start -p running-upgrade-441227 --memory=2200 --vm-driver=docker  --container-runtime=docker: (47.311962709s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-441227 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-441227 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.441798317s)
helpers_test.go:175: Cleaning up "running-upgrade-441227" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-441227
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-441227: (2.238515683s)
--- PASS: TestRunningBinaryUpgrade (84.54s)

TestKubernetesUpgrade (368.28s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0528 21:46:22.466692 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (57.284817506s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-388852
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-388852: (1.370033744s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-388852 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-388852 status --format={{.Host}}: exit status 7 (89.68413ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.973333466s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-388852 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (101.007732ms)
-- stdout --
	* [kubernetes-upgrade-388852] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-388852
	    minikube start -p kubernetes-upgrade-388852 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3888522 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-388852 --kubernetes-version=v1.30.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.882145899s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-388852" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-388852
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-388852: (2.452425778s)
--- PASS: TestKubernetesUpgrade (368.28s)
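The upgrade/downgrade dance above reduces to four starts; a sketch using the versions from this run:

	minikube start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
	minikube stop -p kubernetes-upgrade-388852
	minikube start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.30.1 --driver=docker --container-runtime=docker   # upgrade succeeds
	minikube start -p kubernetes-upgrade-388852 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker   # downgrade refused: exit status 106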

TestMissingContainerUpgrade (116.34s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.869854440 start -p missing-upgrade-458131 --memory=2200 --driver=docker  --container-runtime=docker
E0528 21:45:00.546430 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.869854440 start -p missing-upgrade-458131 --memory=2200 --driver=docker  --container-runtime=docker: (36.4457376s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-458131
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-458131: (10.432353176s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-458131
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-458131 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-458131 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m5.694063697s)
helpers_test.go:175: Cleaning up "missing-upgrade-458131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-458131
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-458131: (2.573168722s)
--- PASS: TestMissingContainerUpgrade (116.34s)
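A sketch of the missing-container scenario; "/tmp/minikube-v1.26.0" below stands in for the temp copy of the old release that the harness downloads:

	/tmp/minikube-v1.26.0 start -p missing-upgrade-458131 --memory=2200 --driver=docker --container-runtime=docker
	docker stop missing-upgrade-458131 && docker rm missing-upgrade-458131   # delete the node container out from under minikube
	minikube start -p missing-upgrade-458131 --memory=2200 --driver=docker --container-runtime=docker   # current binary recreates it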

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-644352 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-644352 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (92.460726ms)
-- stdout --
	* [NoKubernetes-644352] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18966
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18966-1064873/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18966-1064873/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
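The usage error is the point of this subtest: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing and the valid invocations, taken from the output above:

	minikube start -p NoKubernetes-644352 --no-kubernetes --kubernetes-version=1.20 --driver=docker   # exit status 14 (MK_USAGE)
	minikube config unset kubernetes-version          # clear a globally pinned version, per the error text
	minikube start -p NoKubernetes-644352 --no-kubernetes --driver=docker                             # valid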

TestNoKubernetes/serial/StartWithK8s (44.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-644352 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-644352 --driver=docker  --container-runtime=docker: (44.510316549s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-644352 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.83s)

TestNoKubernetes/serial/StartWithStopK8s (7.9s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-644352 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-644352 --no-kubernetes --driver=docker  --container-runtime=docker: (5.848642226s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-644352 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-644352 status -o json: exit status 2 (305.427851ms)
-- stdout --
	{"Name":"NoKubernetes-644352","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-644352
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-644352: (1.750503387s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.90s)

TestNoKubernetes/serial/Start (10.02s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-644352 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-644352 --no-kubernetes --driver=docker  --container-runtime=docker: (10.016295577s)
--- PASS: TestNoKubernetes/serial/Start (10.02s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-644352 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-644352 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.360762ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (0.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-644352
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-644352: (1.222067199s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.32s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-644352 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-644352 --driver=docker  --container-runtime=docker: (7.322957457s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.32s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-644352 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-644352 "sudo systemctl is-active --quiet service kubelet": exit status 1 (290.791073ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (1.18s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.18s)

TestStoppedBinaryUpgrade/Upgrade (117.63s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2446709931 start -p stopped-upgrade-459550 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0528 21:43:36.072954 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:43:38.624912 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:38.630146 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:38.640371 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:38.660574 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:38.700838 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:38.781086 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:38.941445 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:39.261966 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:39.902951 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:41.183296 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2446709931 start -p stopped-upgrade-459550 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m13.888975994s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2446709931 -p stopped-upgrade-459550 stop
E0528 21:43:43.743517 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:43:48.864269 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2446709931 -p stopped-upgrade-459550 stop: (10.826760803s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-459550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0528 21:43:59.105284 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:44:19.586157 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-459550 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.91029688s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.63s)
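The stopped-binary upgrade reduces to three steps; "/tmp/minikube-v1.26.0" is a placeholder for the temp copy of the old release used above:

	/tmp/minikube-v1.26.0 start -p stopped-upgrade-459550 --memory=2200 --vm-driver=docker --container-runtime=docker
	/tmp/minikube-v1.26.0 -p stopped-upgrade-459550 stop
	minikube start -p stopped-upgrade-459550 --memory=2200 --driver=docker --container-runtime=docker   # new binary adopts the stopped cluster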

TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-459550
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-459550: (1.362038701s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.36s)

TestPause/serial/Start (87.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-653044 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0528 21:48:36.072604 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 21:48:38.624604 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 21:49:06.307136 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-653044 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m27.515790086s)
--- PASS: TestPause/serial/Start (87.52s)

TestPause/serial/SecondStartNoReconfiguration (35.45s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-653044 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-653044 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.437067509s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.45s)

TestPause/serial/Pause (0.55s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-653044 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.55s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-653044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-653044 --output=json --layout=cluster: exit status 2 (320.500012ms)
-- stdout --
	{"Name":"pause-653044","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-653044","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
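Note the status semantics shown above: a paused cluster reports HTTP-style StatusCode 418 ("Paused") and the status command itself exits 2. A sketch:

	minikube pause -p pause-653044
	minikube status -p pause-653044 --output=json --layout=cluster   # exit status 2; cluster StatusCode 418 (Paused)
	minikube unpause -p pause-653044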

TestPause/serial/Unpause (0.49s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-653044 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.49s)

TestPause/serial/PauseAgain (0.68s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-653044 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.68s)

TestPause/serial/DeletePaused (2.23s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-653044 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-653044 --alsologtostderr -v=5: (2.226929466s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

TestPause/serial/VerifyDeletedResources (14.28s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (14.227267528s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-653044
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-653044: exit status 1 (14.823461ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-653044: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (14.28s)
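The cleanup check is just three docker probes after delete, exactly as run above:

	minikube delete -p pause-653044
	docker ps -a                         # the profile's container should be gone
	docker volume inspect pause-653044   # exit status 1: "no such volume"
	docker network ls                    # the profile's network should be gone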

TestNetworkPlugins/group/auto/Start (87.85s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0528 21:50:43.858887 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m27.853021775s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.85s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (13.41s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ddg7c" [0ba94afd-e1c1-47dd-bdde-069e3ca9cae3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ddg7c" [0ba94afd-e1c1-47dd-bdde-069e3ca9cae3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.006148268s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.41s)

TestNetworkPlugins/group/flannel/Start (71.66s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m11.658320875s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.66s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)
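The DNS/Localhost/HairPin trio used throughout these network-plugin groups boils down to three probes against the netcat deployment; a sketch for the auto profile, using the commands from the log:

	kubectl --context auto-608294 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-608294 exec deployment/netcat -- nslookup kubernetes.default                   # DNS
	kubectl --context auto-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # Localhost
	kubectl --context auto-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # HairPin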

TestNetworkPlugins/group/calico/Start (80.21s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m20.209602952s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.21s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xgct2" [91c91486-d793-4cb3-ab06-a992113c2c5e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004370466s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sql6c" [cb950bf4-4028-41ca-bea3-77be9d0e4cf8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sql6c" [cb950bf4-4028-41ca-bea3-77be9d0e4cf8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003387288s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.35s)

TestNetworkPlugins/group/flannel/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.30s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nbh4s" [5f0efcd3-f419-45a8-9596-982d4997ed5b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.022831743s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.45s)

TestNetworkPlugins/group/calico/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qs74v" [0b6eedd3-6849-42a1-a343-a8ff4b1da967] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qs74v" [0b6eedd3-6849-42a1-a343-a8ff4b1da967] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.005629621s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

TestNetworkPlugins/group/custom-flannel/Start (71.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m11.441914823s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.44s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.31s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.31s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/false/Start (52.62s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (52.623007452s)
--- PASS: TestNetworkPlugins/group/false/Start (52.62s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sfpvt" [2bf00c55-5a15-4ee2-a1bf-6fbda4cff791] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sfpvt" [2bf00c55-5a15-4ee2-a1bf-6fbda4cff791] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004218776s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.40s)
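
KubeletFlags greps the live kubelet command line over SSH. Filtering that line is a quick way to see how a profile wired up its networking; a rough sketch (the grep pattern is illustrative and may match nothing on runtimes where these flags have moved elsewhere):

	out/minikube-linux-arm64 ssh -p false-608294 "pgrep -a kubelet" \
	  | tr ' ' '\n' | grep -E -- '--(cni|network|hairpin)' || true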

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mmn7l" [b38225a7-9bdf-4aa3-9754-f3b4823cc657] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mmn7l" [b38225a7-9bdf-4aa3-9754-f3b4823cc657] Running
E0528 21:55:26.905106 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.003205046s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.20s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (73.34s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0528 21:55:43.858801 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m13.336271047s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (54.95s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0528 21:56:41.619038 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:41.624281 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:41.635488 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:41.655732 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:41.696414 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:41.779915 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:41.940263 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:42.260371 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:42.900594 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:56:44.181432 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (54.95201297s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (54.95s)
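
Between them, the Start runs in this group cover every networking mode the suite drives through a single flag. A condensed sketch of the variants (memory, wait, and timeout flags as logged above):

	# built-in CNI selected by name
	out/minikube-linux-arm64 start -p kindnet-608294 --cni=kindnet --driver=docker --container-runtime=docker
	# custom CNI applied from a local manifest
	out/minikube-linux-arm64 start -p custom-flannel-608294 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=docker
	# no CNI at all
	out/minikube-linux-arm64 start -p false-608294 --cni=false --driver=docker --container-runtime=docker
	# legacy kubenet network plugin instead of a CNI
	out/minikube-linux-arm64 start -p kubenet-608294 --network-plugin=kubenet --driver=docker --container-runtime=docker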

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vnkzs" [5881b388-0536-4b97-acbf-10f19592de68] Running
E0528 21:56:46.742267 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004804873s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
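
The ControllerPod gate is a label-selector readiness wait; a rough hand-rolled equivalent with kubectl wait (label and namespace taken from the log line above, timeout mirroring the harness's 10m budget):

	kubectl --context kindnet-608294 -n kube-system wait pod \
	  --selector=app=kindnet --for=condition=Ready --timeout=10m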

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-t8g7n" [94773a22-df54-42ed-a3b5-de864ec4f6d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-t8g7n" [94773a22-df54-42ed-a3b5-de864ec4f6d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003977715s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-608294 "pgrep -a kubelet"
E0528 21:56:51.863709 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7flg7" [e5617ff7-e792-4737-a52c-b54c554d419e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7flg7" [e5617ff7-e792-4737-a52c-b54c554d419e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004791041s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0528 21:57:02.104395 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (58.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (58.315249528s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (94.62s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0528 21:58:03.546906 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 21:58:03.960215 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:03.965412 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:03.975638 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:03.995868 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:04.036106 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:04.116379 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:04.276606 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:04.597029 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:05.237551 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:06.518450 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:09.079660 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:14.200083 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 21:58:24.441115 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-608294 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m34.622608859s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-sc4kj" [194054ec-fd07-4bd9-99c2-da6eaa8e90a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-sc4kj" [194054ec-fd07-4bd9-99c2-da6eaa8e90a8] Running
E0528 21:58:36.072447 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004442499s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (153.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-292036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0528 21:58:59.498865 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-292036 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m33.478276342s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.48s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-608294 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-608294 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b7pgb" [c655100c-fff0-43b1-8ac1-d8e533954b97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b7pgb" [c655100c-fff0-43b1-8ac1-d8e533954b97] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003326265s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-608294 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.20s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-608294 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (91.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-438399 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0528 21:59:58.044039 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.049267 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.059844 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.080435 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.120658 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.200915 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.361324 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:58.681577 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 21:59:59.322200 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:00:00.602995 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:00:00.940287 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
E0528 22:00:01.667657 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 22:00:03.163584 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:00:08.283783 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:00:18.523959 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:00:18.831920 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:18.837241 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:18.847491 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:18.867734 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:18.908010 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:18.988378 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:19.148724 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:19.469167 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:20.110147 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:21.390818 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:23.951459 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:29.071608 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:39.004807 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:00:39.312794 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:00:43.858856 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 22:00:47.803298 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 22:00:59.793649 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-438399 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (1m31.93747333s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (91.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-438399 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dda18ce1-6de6-4fa4-b4b2-ee449f87c0fc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dda18ce1-6de6-4fa4-b4b2-ee449f87c0fc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004023822s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-438399 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)
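
DeployApp creates a single busybox pod and then reads the open-file limit inside the container. The manifest itself is not reproduced in this log; a hypothetical stand-in for testdata/busybox.yaml (the real file may differ), followed by the same ulimit check:

	kubectl --context embed-certs-438399 create -f - <<-'EOF'
	apiVersion: v1
	kind: Pod
	metadata:
	  name: busybox
	  labels:
	    integration-test: busybox
	spec:
	  containers:
	  - name: busybox
	    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    command: ["sleep", "3600"]
	EOF
	kubectl --context embed-certs-438399 exec busybox -- /bin/sh -c "ulimit -n"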

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-438399 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-438399 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)
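
The --images/--registries overrides on addons enable let the suite point metrics-server at a stand-in image and registry; the describe step then inspects what the deployment actually rolled out. By hand (commands as logged; the grep is only a convenience):

	out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-438399 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context embed-certs-438399 describe deploy/metrics-server -n kube-system | grep -i image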

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-438399 --alsologtostderr -v=3
E0528 22:01:19.964991 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:01:22.861464 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-438399 --alsologtostderr -v=3: (11.018571715s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-292036 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [601866f4-61da-4346-b9f8-3b87d57393d9] Pending
helpers_test.go:344: "busybox" [601866f4-61da-4346-b9f8-3b87d57393d9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [601866f4-61da-4346-b9f8-3b87d57393d9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.008642024s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-292036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-438399 -n embed-certs-438399
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-438399 -n embed-certs-438399: exit status 7 (136.135923ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-438399 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
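
status --format renders a Go template over minikube's status struct, and the exit code encodes machine state, which is why the harness accepts "exit status 7 (may be ok)" while the machine is stopped. A hand-run equivalent:

	out/minikube-linux-arm64 status --format='{{.Host}}' -p embed-certs-438399
	echo "exit=$?"   # prints Stopped above; non-zero exit (7 here) while the host is down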

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (290.45s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-438399 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-438399 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m50.089819446s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-438399 -n embed-certs-438399
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (290.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-292036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-292036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.317711676s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-292036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.49s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-292036 --alsologtostderr -v=3
E0528 22:01:40.754483 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:01:41.618180 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 22:01:45.763232 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:45.768546 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:45.778752 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:45.798985 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:45.839242 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:45.919494 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:46.079992 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:46.400584 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:47.041460 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:48.322091 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:50.709741 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:50.715040 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:50.725359 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:50.745617 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:50.786007 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:50.866328 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:50.882560 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:01:51.026811 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:01:51.347211 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-292036 --alsologtostderr -v=3: (11.394609032s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-292036 -n old-k8s-version-292036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-292036 -n old-k8s-version-292036: exit status 7 (73.176076ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-292036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-r7vvs" [0ecac2d4-5769-4d61-bdd0-af22b2ea43dc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005131107s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-r7vvs" [0ecac2d4-5769-4d61-bdd0-af22b2ea43dc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004126832s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-438399 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-438399 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-438399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-438399 -n embed-certs-438399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-438399 -n embed-certs-438399: exit status 2 (309.816543ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-438399 -n embed-certs-438399
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-438399 -n embed-certs-438399: exit status 2 (328.637835ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-438399 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-438399 -n embed-certs-438399
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-438399 -n embed-certs-438399
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)
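
The Pause sequence above cycles pause -> status -> unpause -> status: while paused, the {{.APIServer}} template reports Paused and {{.Kubelet}} reports Stopped, each with exit status 2, and the final status checks pass after unpause. A compact replay (combining two fields in one template is a sketch, not harness code):

	out/minikube-linux-arm64 pause -p embed-certs-438399
	out/minikube-linux-arm64 status --format='{{.APIServer}}/{{.Kubelet}}' -p embed-certs-438399 || true   # Paused/Stopped, exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-438399
	out/minikube-linux-arm64 status --format='{{.APIServer}}/{{.Kubelet}}' -p embed-certs-438399           # expected Running/Running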

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-930485 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0528 22:06:41.618760 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 22:06:45.764109 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:06:47.054347 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
E0528 22:06:50.709521 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:07:13.447515 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:07:18.393027 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-930485 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (1m9.919750676s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-930485 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [84f50737-f269-4e65-bb42-0ce98cb958fb] Pending
helpers_test.go:344: "busybox" [84f50737-f269-4e65-bb42-0ce98cb958fb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [84f50737-f269-4e65-bb42-0ce98cb958fb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.006694277s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-930485 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.35s)
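
Note: the deploy check above is reproducible by hand with the same manifest and probe; the wait step below is a stand-in (an assumption, not part of the harness) for the pod-matching poll the test performs:

	kubectl --context no-preload-930485 create -f testdata/busybox.yaml
	kubectl --context no-preload-930485 wait --for=condition=ready pod busybox --timeout=8m0s
	kubectl --context no-preload-930485 exec busybox -- /bin/sh -c "ulimit -n"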

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-930485 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-930485 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.09s)
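
Note: the --images/--registries overrides appear to substitute fake.domain and registry.k8s.io/echoserver:1.4 for the real MetricsServer image, so the addon can be enabled and its Deployment rendered without pulling metrics-server; the describe call then only needs to confirm the substituted registry shows up in the pod template, e.g. (illustrative grep, not part of the test):

	kubectl --context no-preload-930485 describe deploy/metrics-server -n kube-system | grep fake.domain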

TestStartStop/group/no-preload/serial/Stop (10.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-930485 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-930485 --alsologtostderr -v=3: (10.8301463s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.83s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7xnc2" [c029ff76-aa5c-4702-9aa3-27204fbb7dbc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004684661s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-930485 -n no-preload-930485
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-930485 -n no-preload-930485: exit status 7 (67.040921ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-930485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
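
Note: the non-zero status here is expected, as the "(may be ok)" annotation says: with the profile stopped, status --format={{.Host}} prints Stopped and exits 7, and the test then verifies the dashboard addon can still be enabled against the stopped cluster:

	out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-930485 -n no-preload-930485; echo "exit=$?"   # Stopped / exit=7 per the run above
	out/minikube-linux-arm64 addons enable dashboard -p no-preload-930485 --images=MetricsScraper=registry.k8s.io/echoserver:1.4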

TestStartStop/group/no-preload/serial/SecondStart (269.38s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-930485 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-930485 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m29.026226365s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-930485 -n no-preload-930485
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (269.38s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-7xnc2" [c029ff76-aa5c-4702-9aa3-27204fbb7dbc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004673345s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-292036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-292036 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (4.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-292036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-292036 --alsologtostderr -v=1: (1.152213712s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-292036 -n old-k8s-version-292036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-292036 -n old-k8s-version-292036: exit status 2 (447.222368ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-292036 -n old-k8s-version-292036
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-292036 -n old-k8s-version-292036: exit status 2 (447.604086ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-292036 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-292036 -n old-k8s-version-292036
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-292036 -n old-k8s-version-292036
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.01s)
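
Note: the pause cycle asserts both status fields reflect the paused state: after pause, {{.APIServer}} reports Paused and {{.Kubelet}} reports Stopped (each with tolerated exit status 2), and after unpause both status queries return cleanly. By hand:

	out/minikube-linux-arm64 pause -p old-k8s-version-292036 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-292036   # Paused, exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-292036 --alsologtostderr -v=1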

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-641283 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0528 22:08:25.289188 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:08:36.073241 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 22:08:38.625588 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 22:08:39.016939 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
E0528 22:08:52.975205 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:09:03.212046 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-641283 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (49.394411192s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.39s)
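
Note: this group differs from the default profile only in --apiserver-port=8444, which moves the API server off minikube's default 8443; every later status/kubectl step in the serial group then works against the non-default port. The start line above is directly reusable:

	out/minikube-linux-arm64 start -p default-k8s-diff-port-641283 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --container-runtime=docker --kubernetes-version=v1.30.1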

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-641283 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1234789a-ab03-4463-9f89-5e2a89e0c394] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1234789a-ab03-4463-9f89-5e2a89e0c394] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004164964s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-641283 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-641283 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-641283 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.88s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-641283 --alsologtostderr -v=3
E0528 22:09:30.895453 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-641283 --alsologtostderr -v=3: (10.880683861s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283: exit status 7 (77.886176ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-641283 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-641283 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0528 22:09:58.044033 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/custom-flannel-608294/client.crt: no such file or directory
E0528 22:10:18.831408 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/false-608294/client.crt: no such file or directory
E0528 22:10:43.858631 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 22:11:30.401500 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:30.406841 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:30.417125 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:30.437463 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:30.477781 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:30.558067 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:30.718825 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:31.039203 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:31.679598 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:32.960058 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:35.520925 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:40.641068 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:11:41.618190 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 22:11:45.763106 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kindnet-608294/client.crt: no such file or directory
E0528 22:11:50.709888 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/kubenet-608294/client.crt: no such file or directory
E0528 22:11:50.882177 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
E0528 22:12:06.906149 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/addons-885631/client.crt: no such file or directory
E0528 22:12:11.362397 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-641283 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m26.47610021s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-j94j9" [4b137b39-6645-48eb-907f-a9f5460c58f4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004081652s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-j94j9" [4b137b39-6645-48eb-907f-a9f5460c58f4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004351513s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-930485 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-930485 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.8s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-930485 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-930485 -n no-preload-930485
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-930485 -n no-preload-930485: exit status 2 (310.79208ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-930485 -n no-preload-930485
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-930485 -n no-preload-930485: exit status 2 (337.824296ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-930485 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-930485 -n no-preload-930485
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-930485 -n no-preload-930485
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

TestStartStop/group/newest-cni/serial/FirstStart (45.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-814733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0528 22:13:03.960466 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/flannel-608294/client.crt: no such file or directory
E0528 22:13:04.668651 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/auto-608294/client.crt: no such file or directory
E0528 22:13:25.288791 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/enable-default-cni-608294/client.crt: no such file or directory
E0528 22:13:36.072862 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/functional-409073/client.crt: no such file or directory
E0528 22:13:38.624691 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/skaffold-378369/client.crt: no such file or directory
E0528 22:13:39.017582 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/calico-608294/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-814733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (45.289715975s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.29s)
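
Note: this profile starts in CNI mode (--network-plugin=cni, pod CIDR 10.42.0.0/16 via --extra-config=kubeadm.pod-network-cidr) without the usual workload readiness checks, which is why --wait is narrowed to apiserver,system_pods,default_sa and why the DeployApp/UserAppExistsAfterStop/AddonExistsAfterStop steps below are no-ops with the "cni mode requires additional setup" warning. Scheduling user pods on such a cluster would first require applying a network-plugin manifest by hand, e.g. (hypothetical placeholder path, not part of the test):

	kubectl --context newest-cni-814733 apply -f <your-cni-manifest>.yaml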

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-814733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-814733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.26798155s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (10.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-814733 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-814733 --alsologtostderr -v=3: (10.910569789s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.91s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-814733 -n newest-cni-814733
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-814733 -n newest-cni-814733: exit status 7 (73.817734ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-814733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (17.63s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-814733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-814733 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (17.274176286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-814733 -n newest-cni-814733
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.63s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wdb8w" [d4f9491e-c9c4-4d98-9739-ad0942c219a9] Running
E0528 22:14:03.212245 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/bridge-608294/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003302133s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-wdb8w" [d4f9491e-c9c4-4d98-9739-ad0942c219a9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003346331s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-641283 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-814733 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.72s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-814733 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-814733 -n newest-cni-814733
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-814733 -n newest-cni-814733: exit status 2 (312.197053ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-814733 -n newest-cni-814733
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-814733 -n newest-cni-814733: exit status 2 (331.742144ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-814733 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-814733 -n newest-cni-814733
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-814733 -n newest-cni-814733
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.72s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-641283 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-641283 --alsologtostderr -v=1
E0528 22:14:14.243108 1070309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/old-k8s-version-292036/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283: exit status 2 (387.440311ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283: exit status 2 (347.864667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-641283 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-641283 -n default-k8s-diff-port-641283
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

TestDownloadOnlyKic (0.53s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-725858 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-725858" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-725858
--- SKIP: TestDownloadOnlyKic (0.53s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-608294 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-608294

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-608294

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-608294

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-608294

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-608294

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-608294

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-608294

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-608294

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-608294

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-608294

>>> host: /etc/nsswitch.conf:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/hosts:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/resolv.conf:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-608294

>>> host: crictl pods:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: crictl containers:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> k8s: describe netcat deployment:
error: context "cilium-608294" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-608294" does not exist

>>> k8s: netcat logs:
error: context "cilium-608294" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-608294" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-608294" does not exist

>>> k8s: coredns logs:
error: context "cilium-608294" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-608294" does not exist

>>> k8s: api server logs:
error: context "cilium-608294" does not exist

>>> host: /etc/cni:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: ip a s:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: ip r s:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: iptables-save:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: iptables table nat:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-608294

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-608294

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-608294" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-608294" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-608294

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-608294

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-608294" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-608294" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-608294" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-608294" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-608294" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: kubelet daemon config:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> k8s: kubelet logs:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18966-1064873/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:40:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-docker-602515
contexts:
- context:
    cluster: offline-docker-602515
    extensions:
    - extension:
        last-update: Tue, 28 May 2024 21:40:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: offline-docker-602515
  name: offline-docker-602515
current-context: offline-docker-602515
kind: Config
preferences: {}
users:
- name: offline-docker-602515
  user:
    client-certificate: /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/offline-docker-602515/client.crt
    client-key: /home/jenkins/minikube-integration/18966-1064873/.minikube/profiles/offline-docker-602515/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-608294

>>> host: docker daemon status:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: docker daemon config:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: docker system info:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: cri-docker daemon status:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: cri-docker daemon config:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: cri-dockerd version:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: containerd daemon status:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: containerd daemon config:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: containerd config dump:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: crio daemon status:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: crio daemon config:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: /etc/crio:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

>>> host: crio config:
* Profile "cilium-608294" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-608294"

----------------------- debugLogs end: cilium-608294 [took: 3.861787934s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-608294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-608294
--- SKIP: TestNetworkPlugins/group/cilium (4.02s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-501801" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-501801
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)