Test Report: Docker_Linux_docker_arm64 18925

9bd6871c0608907332c6bb982838c8ee113ad42f:2024-05-20:34544

Test fail (2/342)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
|    30 | TestAddons/parallel/Ingress                             |        35.97 |
|   309 | TestStartStop/group/old-k8s-version/serial/SecondStart  |       376.06 |
TestAddons/parallel/Ingress (35.97s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-988376 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-988376 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-988376 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [847f6f23-3143-4e45-a42b-ecdbffc3fea9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [847f6f23-3143-4e45-a42b-ecdbffc3fea9] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004001294s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-988376 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.061276154s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-988376 addons disable ingress --alsologtostderr -v=1: (7.628078336s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-988376
helpers_test.go:235: (dbg) docker inspect addons-988376:

-- stdout --
	[
	    {
	        "Id": "a53c2921353dc35cb84716cbfc971c3e567f72c97168ac3255779c7762a24bf5",
	        "Created": "2024-05-20T10:20:45.61196877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8627,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-20T10:20:45.936120846Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:56620e18f2c2c9a0448fc43c42f840334bd2baea497ff8deae66477dd0dbfecf",
	        "ResolvConfPath": "/var/lib/docker/containers/a53c2921353dc35cb84716cbfc971c3e567f72c97168ac3255779c7762a24bf5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a53c2921353dc35cb84716cbfc971c3e567f72c97168ac3255779c7762a24bf5/hostname",
	        "HostsPath": "/var/lib/docker/containers/a53c2921353dc35cb84716cbfc971c3e567f72c97168ac3255779c7762a24bf5/hosts",
	        "LogPath": "/var/lib/docker/containers/a53c2921353dc35cb84716cbfc971c3e567f72c97168ac3255779c7762a24bf5/a53c2921353dc35cb84716cbfc971c3e567f72c97168ac3255779c7762a24bf5-json.log",
	        "Name": "/addons-988376",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-988376:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-988376",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a9b531179d35fcc3537ce8b4954f490e7f369a6d4c60f4efbc2ef4f57876231f-init/diff:/var/lib/docker/overlay2/5223768ff4f8d0789b9175fc3fdf07e45fc06ea6efae7d6f7831e460b38e1113/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9b531179d35fcc3537ce8b4954f490e7f369a6d4c60f4efbc2ef4f57876231f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9b531179d35fcc3537ce8b4954f490e7f369a6d4c60f4efbc2ef4f57876231f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9b531179d35fcc3537ce8b4954f490e7f369a6d4c60f4efbc2ef4f57876231f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-988376",
	                "Source": "/var/lib/docker/volumes/addons-988376/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-988376",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-988376",
	                "name.minikube.sigs.k8s.io": "addons-988376",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f5061b9b9bfd28375b7aeaf438acaca28c725b0e20347994b87166d26db34061",
	            "SandboxKey": "/var/run/docker/netns/f5061b9b9bfd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-988376": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "6cd728afe827d93cc2c38aab0c204b4f4db5011364dec32f3d648580452b935b",
	                    "EndpointID": "2c9972e4da0e8366fd0b6fab00028680e364e8b64993ad78c2e20bc404885ffa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-988376",
	                        "a53c2921353d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-988376 -n addons-988376
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-572511                                                                     | download-only-572511   | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| delete  | -p download-only-479514                                                                     | download-only-479514   | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| delete  | -p download-only-572511                                                                     | download-only-572511   | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| start   | --download-only -p                                                                          | download-docker-508495 | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | download-docker-508495                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-508495                                                                   | download-docker-508495 | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-981870   | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | binary-mirror-981870                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34277                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-981870                                                                     | binary-mirror-981870   | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| addons  | enable dashboard -p                                                                         | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | addons-988376                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | addons-988376                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-988376 --wait=true                                                                | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:22 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=docker                                                                 |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-988376 ip                                                                            | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	| addons  | addons-988376 addons disable                                                                | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | -p addons-988376                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-988376 ssh cat                                                                       | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | /opt/local-path-provisioner/pvc-1d3d24c8-8add-46c7-93b5-a621b104aabf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-988376 addons disable                                                                | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:24 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-988376 addons                                                                        | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-988376 addons                                                                        | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | addons-988376                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:23 UTC | 20 May 24 10:23 UTC |
	|         | -p addons-988376                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | addons-988376                                                                               |                        |         |         |                     |                     |
	| addons  | addons-988376 addons                                                                        | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-988376 ssh curl -s                                                                   | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-988376 ip                                                                            | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	| addons  | addons-988376 addons disable                                                                | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-988376 addons disable                                                                | addons-988376          | jenkins | v1.33.1 | 20 May 24 10:24 UTC | 20 May 24 10:24 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:20:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:20:21.038288    8148 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:20:21.038436    8148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:21.038445    8148 out.go:304] Setting ErrFile to fd 2...
	I0520 10:20:21.038451    8148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:21.038729    8148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:20:21.039148    8148 out.go:298] Setting JSON to false
	I0520 10:20:21.039846    8148 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":141,"bootTime":1716200280,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 10:20:21.039909    8148 start.go:139] virtualization:  
	I0520 10:20:21.042071    8148 out.go:177] * [addons-988376] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:20:21.044672    8148 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:20:21.046211    8148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:20:21.044743    8148 notify.go:220] Checking for updates...
	I0520 10:20:21.049887    8148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 10:20:21.051883    8148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 10:20:21.053705    8148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:20:21.055428    8148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:20:21.057274    8148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:20:21.078354    8148 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:20:21.078475    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:20:21.143750    8148 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-20 10:20:21.134085922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:20:21.143860    8148 docker.go:295] overlay module found
	I0520 10:20:21.147411    8148 out.go:177] * Using the docker driver based on user configuration
	I0520 10:20:21.149567    8148 start.go:297] selected driver: docker
	I0520 10:20:21.149585    8148 start.go:901] validating driver "docker" against <nil>
	I0520 10:20:21.149609    8148 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:20:21.150276    8148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:20:21.201449    8148 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-05-20 10:20:21.192636548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:20:21.201610    8148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:20:21.201886    8148 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:20:21.203976    8148 out.go:177] * Using Docker driver with root privileges
	I0520 10:20:21.205946    8148 cni.go:84] Creating CNI manager for ""
	I0520 10:20:21.205970    8148 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 10:20:21.205978    8148 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 10:20:21.206066    8148 start.go:340] cluster config:
	{Name:addons-988376 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-988376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:20:21.208233    8148 out.go:177] * Starting "addons-988376" primary control-plane node in "addons-988376" cluster
	I0520 10:20:21.210133    8148 cache.go:121] Beginning downloading kic base image for docker with docker
	I0520 10:20:21.212051    8148 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:20:21.213724    8148 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 10:20:21.213771    8148 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 10:20:21.213784    8148 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:20:21.213796    8148 cache.go:56] Caching tarball of preloaded images
	I0520 10:20:21.213951    8148 preload.go:173] Found /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 10:20:21.213959    8148 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 10:20:21.214415    8148 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/config.json ...
	I0520 10:20:21.214447    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/config.json: {Name:mke0d92ef1eeca33ec89aa3c03e11de49e02faad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:21.227392    8148 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:20:21.227514    8148 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:20:21.227533    8148 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory, skipping pull
	I0520 10:20:21.227537    8148 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in cache, skipping pull
	I0520 10:20:21.227546    8148 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0520 10:20:21.227551    8148 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a from local cache
	I0520 10:20:37.813739    8148 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a from cached tarball
	I0520 10:20:37.813786    8148 cache.go:194] Successfully downloaded all kic artifacts
	I0520 10:20:37.813828    8148 start.go:360] acquireMachinesLock for addons-988376: {Name:mkfb9cffc40bdbdb1bd0c031d321628a1483183a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 10:20:37.814001    8148 start.go:364] duration metric: took 144.989µs to acquireMachinesLock for "addons-988376"
	I0520 10:20:37.814039    8148 start.go:93] Provisioning new machine with config: &{Name:addons-988376 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-988376 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 10:20:37.814130    8148 start.go:125] createHost starting for "" (driver="docker")
	I0520 10:20:37.816231    8148 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0520 10:20:37.816466    8148 start.go:159] libmachine.API.Create for "addons-988376" (driver="docker")
	I0520 10:20:37.816498    8148 client.go:168] LocalClient.Create starting
	I0520 10:20:37.816601    8148 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem
	I0520 10:20:38.236968    8148 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem
	I0520 10:20:39.355360    8148 cli_runner.go:164] Run: docker network inspect addons-988376 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0520 10:20:39.370520    8148 cli_runner.go:211] docker network inspect addons-988376 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0520 10:20:39.370609    8148 network_create.go:281] running [docker network inspect addons-988376] to gather additional debugging logs...
	I0520 10:20:39.370629    8148 cli_runner.go:164] Run: docker network inspect addons-988376
	W0520 10:20:39.383898    8148 cli_runner.go:211] docker network inspect addons-988376 returned with exit code 1
	I0520 10:20:39.383930    8148 network_create.go:284] error running [docker network inspect addons-988376]: docker network inspect addons-988376: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-988376 not found
	I0520 10:20:39.383944    8148 network_create.go:286] output of [docker network inspect addons-988376]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-988376 not found
	
	** /stderr **
	I0520 10:20:39.384042    8148 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 10:20:39.398134    8148 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400172be90}
	I0520 10:20:39.398177    8148 network_create.go:124] attempt to create docker network addons-988376 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0520 10:20:39.398233    8148 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-988376 addons-988376
	I0520 10:20:39.455417    8148 network_create.go:108] docker network addons-988376 192.168.49.0/24 created
	I0520 10:20:39.455456    8148 kic.go:121] calculated static IP "192.168.49.2" for the "addons-988376" container
	I0520 10:20:39.455528    8148 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0520 10:20:39.468688    8148 cli_runner.go:164] Run: docker volume create addons-988376 --label name.minikube.sigs.k8s.io=addons-988376 --label created_by.minikube.sigs.k8s.io=true
	I0520 10:20:39.485123    8148 oci.go:103] Successfully created a docker volume addons-988376
	I0520 10:20:39.485221    8148 cli_runner.go:164] Run: docker run --rm --name addons-988376-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-988376 --entrypoint /usr/bin/test -v addons-988376:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0520 10:20:41.574477    8148 cli_runner.go:217] Completed: docker run --rm --name addons-988376-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-988376 --entrypoint /usr/bin/test -v addons-988376:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib: (2.089211581s)
	I0520 10:20:41.574513    8148 oci.go:107] Successfully prepared a docker volume addons-988376
	I0520 10:20:41.574536    8148 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 10:20:41.574555    8148 kic.go:194] Starting extracting preloaded images to volume ...
	I0520 10:20:41.574667    8148 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-988376:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0520 10:20:45.546538    8148 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-988376:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (3.971827434s)
	I0520 10:20:45.546565    8148 kic.go:203] duration metric: took 3.972007419s to extract preloaded images to volume ...
	W0520 10:20:45.546710    8148 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0520 10:20:45.546814    8148 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0520 10:20:45.598087    8148 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-988376 --name addons-988376 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-988376 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-988376 --network addons-988376 --ip 192.168.49.2 --volume addons-988376:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0520 10:20:45.946866    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Running}}
	I0520 10:20:45.970788    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:20:45.988238    8148 cli_runner.go:164] Run: docker exec addons-988376 stat /var/lib/dpkg/alternatives/iptables
	I0520 10:20:46.058375    8148 oci.go:144] the created container "addons-988376" has a running status.
	I0520 10:20:46.058409    8148 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa...
	I0520 10:20:46.487783    8148 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0520 10:20:46.515036    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:20:46.540493    8148 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0520 10:20:46.540519    8148 kic_runner.go:114] Args: [docker exec --privileged addons-988376 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0520 10:20:46.596370    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:20:46.614174    8148 machine.go:94] provisionDockerMachine start ...
	I0520 10:20:46.614277    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:46.639784    8148 main.go:141] libmachine: Using SSH client type: native
	I0520 10:20:46.640044    8148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0520 10:20:46.640059    8148 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 10:20:46.792524    8148 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-988376
	
	I0520 10:20:46.792549    8148 ubuntu.go:169] provisioning hostname "addons-988376"
	I0520 10:20:46.792616    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:46.814755    8148 main.go:141] libmachine: Using SSH client type: native
	I0520 10:20:46.814994    8148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0520 10:20:46.815010    8148 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-988376 && echo "addons-988376" | sudo tee /etc/hostname
	I0520 10:20:46.957409    8148 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-988376
	
	I0520 10:20:46.957491    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:46.974016    8148 main.go:141] libmachine: Using SSH client type: native
	I0520 10:20:46.974259    8148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0520 10:20:46.974285    8148 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-988376' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-988376/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-988376' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 10:20:47.101232    8148 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 10:20:47.101296    8148 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-2151/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-2151/.minikube}
	I0520 10:20:47.101329    8148 ubuntu.go:177] setting up certificates
	I0520 10:20:47.101340    8148 provision.go:84] configureAuth start
	I0520 10:20:47.101398    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-988376
	I0520 10:20:47.117336    8148 provision.go:143] copyHostCerts
	I0520 10:20:47.117424    8148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem (1078 bytes)
	I0520 10:20:47.117599    8148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem (1123 bytes)
	I0520 10:20:47.117672    8148 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem (1675 bytes)
	I0520 10:20:47.117748    8148 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem org=jenkins.addons-988376 san=[127.0.0.1 192.168.49.2 addons-988376 localhost minikube]
	I0520 10:20:47.501706    8148 provision.go:177] copyRemoteCerts
	I0520 10:20:47.501782    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 10:20:47.501822    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:47.517797    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:20:47.609811    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 10:20:47.634082    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0520 10:20:47.657131    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 10:20:47.680395    8148 provision.go:87] duration metric: took 579.040997ms to configureAuth
	I0520 10:20:47.680427    8148 ubuntu.go:193] setting minikube options for container-runtime
	I0520 10:20:47.680611    8148 config.go:182] Loaded profile config "addons-988376": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:20:47.680672    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:47.696100    8148 main.go:141] libmachine: Using SSH client type: native
	I0520 10:20:47.696378    8148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0520 10:20:47.696391    8148 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 10:20:47.821045    8148 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0520 10:20:47.821080    8148 ubuntu.go:71] root file system type: overlay
	I0520 10:20:47.821182    8148 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 10:20:47.821245    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:47.837261    8148 main.go:141] libmachine: Using SSH client type: native
	I0520 10:20:47.837504    8148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0520 10:20:47.837584    8148 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 10:20:47.972462    8148 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 10:20:47.972554    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:47.993604    8148 main.go:141] libmachine: Using SSH client type: native
	I0520 10:20:47.993900    8148 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0520 10:20:47.993923    8148 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 10:20:48.735595    8148 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-20 10:20:47.967102568 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0520 10:20:48.735691    8148 machine.go:97] duration metric: took 2.121496058s to provisionDockerMachine
	I0520 10:20:48.735743    8148 client.go:171] duration metric: took 10.919233986s to LocalClient.Create
	I0520 10:20:48.735788    8148 start.go:167] duration metric: took 10.919323116s to libmachine.API.Create "addons-988376"
	I0520 10:20:48.735810    8148 start.go:293] postStartSetup for "addons-988376" (driver="docker")
	I0520 10:20:48.735852    8148 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 10:20:48.735939    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 10:20:48.736007    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:48.752535    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:20:48.841712    8148 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 10:20:48.844658    8148 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 10:20:48.844692    8148 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 10:20:48.844703    8148 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 10:20:48.844712    8148 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 10:20:48.844727    8148 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-2151/.minikube/addons for local assets ...
	I0520 10:20:48.844804    8148 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-2151/.minikube/files for local assets ...
	I0520 10:20:48.844829    8148 start.go:296] duration metric: took 108.982803ms for postStartSetup
	I0520 10:20:48.845158    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-988376
	I0520 10:20:48.860624    8148 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/config.json ...
	I0520 10:20:48.860909    8148 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:20:48.860960    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:48.876686    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:20:48.961587    8148 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0520 10:20:48.965856    8148 start.go:128] duration metric: took 11.151711112s to createHost
	I0520 10:20:48.965895    8148 start.go:83] releasing machines lock for "addons-988376", held for 11.151875343s
	I0520 10:20:48.965969    8148 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-988376
	I0520 10:20:48.981640    8148 ssh_runner.go:195] Run: cat /version.json
	I0520 10:20:48.981694    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:48.981936    8148 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 10:20:48.981976    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:20:48.996526    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:20:49.009208    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:20:49.092434    8148 ssh_runner.go:195] Run: systemctl --version
	I0520 10:20:49.207401    8148 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 10:20:49.211650    8148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0520 10:20:49.237014    8148 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0520 10:20:49.237106    8148 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 10:20:49.270759    8148 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0520 10:20:49.270803    8148 start.go:494] detecting cgroup driver to use...
	I0520 10:20:49.270836    8148 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 10:20:49.270959    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:20:49.286633    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 10:20:49.296391    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 10:20:49.306320    8148 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 10:20:49.306388    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 10:20:49.315746    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 10:20:49.325045    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 10:20:49.334439    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 10:20:49.344092    8148 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 10:20:49.357379    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 10:20:49.367480    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 10:20:49.376966    8148 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 10:20:49.386625    8148 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 10:20:49.395144    8148 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 10:20:49.403070    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:20:49.488169    8148 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 10:20:49.579754    8148 start.go:494] detecting cgroup driver to use...
	I0520 10:20:49.579799    8148 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 10:20:49.579849    8148 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 10:20:49.602428    8148 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0520 10:20:49.602492    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 10:20:49.616887    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 10:20:49.636844    8148 ssh_runner.go:195] Run: which cri-dockerd
	I0520 10:20:49.640274    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 10:20:49.650204    8148 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 10:20:49.668624    8148 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 10:20:49.774756    8148 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 10:20:49.868463    8148 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 10:20:49.868626    8148 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 10:20:49.890616    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:20:49.976807    8148 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 10:20:50.239725    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 10:20:50.251530    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 10:20:50.263776    8148 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 10:20:50.349782    8148 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 10:20:50.454021    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:20:50.537820    8148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 10:20:50.550999    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 10:20:50.561427    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:20:50.666733    8148 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 10:20:50.734167    8148 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 10:20:50.734314    8148 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 10:20:50.738519    8148 start.go:562] Will wait 60s for crictl version
	I0520 10:20:50.738635    8148 ssh_runner.go:195] Run: which crictl
	I0520 10:20:50.742533    8148 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 10:20:50.783529    8148 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0520 10:20:50.783655    8148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 10:20:50.804155    8148 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 10:20:50.830935    8148 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0520 10:20:50.831026    8148 cli_runner.go:164] Run: docker network inspect addons-988376 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 10:20:50.844043    8148 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0520 10:20:50.847482    8148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:20:50.857797    8148 kubeadm.go:877] updating cluster {Name:addons-988376 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-988376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 10:20:50.857911    8148 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 10:20:50.857970    8148 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 10:20:50.874109    8148 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 10:20:50.874129    8148 docker.go:615] Images already preloaded, skipping extraction
	I0520 10:20:50.874191    8148 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 10:20:50.889711    8148 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0520 10:20:50.889739    8148 cache_images.go:84] Images are preloaded, skipping loading
	I0520 10:20:50.889757    8148 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0520 10:20:50.889850    8148 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-988376 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-988376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 10:20:50.889920    8148 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 10:20:50.937280    8148 cni.go:84] Creating CNI manager for ""
	I0520 10:20:50.937304    8148 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 10:20:50.937315    8148 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 10:20:50.937334    8148 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-988376 NodeName:addons-988376 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0520 10:20:50.937484    8148 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-988376"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 10:20:50.937553    8148 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0520 10:20:50.945881    8148 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 10:20:50.945959    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 10:20:50.954069    8148 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0520 10:20:50.971099    8148 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 10:20:50.987976    8148 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0520 10:20:51.005910    8148 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0520 10:20:51.010266    8148 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 10:20:51.021325    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:20:51.105032    8148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:20:51.121129    8148 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376 for IP: 192.168.49.2
	I0520 10:20:51.121201    8148 certs.go:194] generating shared ca certs ...
	I0520 10:20:51.121232    8148 certs.go:226] acquiring lock for ca certs: {Name:mka753a63b3bd30b9859f448573f70a0fd066da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:51.121407    8148 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-2151/.minikube/ca.key
	I0520 10:20:51.986407    8148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-2151/.minikube/ca.crt ...
	I0520 10:20:51.986437    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/ca.crt: {Name:mk2c551da1d0dae975f980ec6a710246cf2df9b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:51.986662    8148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-2151/.minikube/ca.key ...
	I0520 10:20:51.986678    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/ca.key: {Name:mkd2ad506859fe4473b5759953e3950ef9a85f22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:51.986785    8148 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.key
	I0520 10:20:52.723966    8148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.crt ...
	I0520 10:20:52.724001    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.crt: {Name:mkf6abedde6dfeef3de27fafe7ba34b6ddadade3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:52.724196    8148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.key ...
	I0520 10:20:52.724210    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.key: {Name:mk5ca67869057a0ba6c26d6636a91c708e08cde4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:52.724296    8148 certs.go:256] generating profile certs ...
	I0520 10:20:52.724385    8148 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.key
	I0520 10:20:52.724407    8148 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt with IP's: []
	I0520 10:20:53.068764    8148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt ...
	I0520 10:20:53.068796    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: {Name:mk8e27811dd7625dc5d6c356daae27ab6ea779a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:53.068987    8148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.key ...
	I0520 10:20:53.069000    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.key: {Name:mkf886aa5c4798f0f6ddd4eaacdcf5b00fd0bc6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:53.069125    8148 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.key.a75a2d1a
	I0520 10:20:53.069150    8148 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.crt.a75a2d1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0520 10:20:53.246369    8148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.crt.a75a2d1a ...
	I0520 10:20:53.246398    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.crt.a75a2d1a: {Name:mk7689e58cde66888b7df6659fef753cb06a024b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:53.246592    8148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.key.a75a2d1a ...
	I0520 10:20:53.246608    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.key.a75a2d1a: {Name:mk4bc1a4775778475b58a2be34fd805251f2e80f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:53.246695    8148 certs.go:381] copying /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.crt.a75a2d1a -> /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.crt
	I0520 10:20:53.246774    8148 certs.go:385] copying /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.key.a75a2d1a -> /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.key
	I0520 10:20:53.246826    8148 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.key
	I0520 10:20:53.246846    8148 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.crt with IP's: []
	I0520 10:20:53.693665    8148 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.crt ...
	I0520 10:20:53.693695    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.crt: {Name:mkbd6ecbd6bda149aea0efb87220b0beda9eacb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:53.693894    8148 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.key ...
	I0520 10:20:53.693909    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.key: {Name:mk5c5e10a41763628a27938bdb4658e924475e20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:20:53.694096    8148 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 10:20:53.694136    8148 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem (1078 bytes)
	I0520 10:20:53.694158    8148 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem (1123 bytes)
	I0520 10:20:53.694188    8148 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem (1675 bytes)
	I0520 10:20:53.694766    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 10:20:53.719151    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 10:20:53.742431    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 10:20:53.764952    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 10:20:53.787650    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0520 10:20:53.810384    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0520 10:20:53.832450    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 10:20:53.856272    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 10:20:53.879297    8148 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 10:20:53.902064    8148 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 10:20:53.918491    8148 ssh_runner.go:195] Run: openssl version
	I0520 10:20:53.923718    8148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 10:20:53.933241    8148 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:20:53.936745    8148 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:20:53.936804    8148 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 10:20:53.943895    8148 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 10:20:53.952915    8148 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 10:20:53.956430    8148 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0520 10:20:53.956515    8148 kubeadm.go:391] StartCluster: {Name:addons-988376 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-988376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:20:53.956670    8148 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 10:20:53.972687    8148 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0520 10:20:53.981324    8148 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0520 10:20:53.990175    8148 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0520 10:20:53.990288    8148 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0520 10:20:53.998785    8148 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0520 10:20:53.998805    8148 kubeadm.go:156] found existing configuration files:
	
	I0520 10:20:53.998884    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0520 10:20:54.009118    8148 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0520 10:20:54.009248    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0520 10:20:54.018191    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0520 10:20:54.027459    8148 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0520 10:20:54.027534    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0520 10:20:54.036058    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0520 10:20:54.045018    8148 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0520 10:20:54.045126    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0520 10:20:54.053795    8148 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0520 10:20:54.062718    8148 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0520 10:20:54.062784    8148 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0520 10:20:54.071203    8148 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0520 10:20:54.123228    8148 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0520 10:20:54.123297    8148 kubeadm.go:309] [preflight] Running pre-flight checks
	I0520 10:20:54.176736    8148 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0520 10:20:54.176895    8148 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1061-aws
	I0520 10:20:54.176980    8148 kubeadm.go:309] OS: Linux
	I0520 10:20:54.177090    8148 kubeadm.go:309] CGROUPS_CPU: enabled
	I0520 10:20:54.177157    8148 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0520 10:20:54.177210    8148 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0520 10:20:54.177278    8148 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0520 10:20:54.177373    8148 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0520 10:20:54.177440    8148 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0520 10:20:54.177493    8148 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0520 10:20:54.177587    8148 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0520 10:20:54.177670    8148 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0520 10:20:54.240042    8148 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0520 10:20:54.240202    8148 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0520 10:20:54.240302    8148 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0520 10:20:54.472303    8148 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0520 10:20:54.476340    8148 out.go:204]   - Generating certificates and keys ...
	I0520 10:20:54.476483    8148 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0520 10:20:54.476571    8148 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0520 10:20:54.970126    8148 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0520 10:20:57.327305    8148 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0520 10:20:58.565853    8148 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0520 10:20:59.157629    8148 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0520 10:20:59.803632    8148 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0520 10:20:59.803959    8148 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-988376 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0520 10:20:59.970604    8148 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0520 10:20:59.970888    8148 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-988376 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0520 10:21:00.824253    8148 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0520 10:21:02.189761    8148 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0520 10:21:02.750910    8148 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0520 10:21:02.751320    8148 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0520 10:21:03.278355    8148 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0520 10:21:03.747305    8148 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0520 10:21:04.186907    8148 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0520 10:21:05.383122    8148 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0520 10:21:06.284607    8148 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0520 10:21:06.285221    8148 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0520 10:21:06.288063    8148 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0520 10:21:06.290508    8148 out.go:204]   - Booting up control plane ...
	I0520 10:21:06.290602    8148 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0520 10:21:06.290678    8148 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0520 10:21:06.290742    8148 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0520 10:21:06.307008    8148 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0520 10:21:06.312134    8148 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0520 10:21:06.312379    8148 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0520 10:21:06.412982    8148 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0520 10:21:06.413079    8148 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0520 10:21:07.914409    8148 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.501789917s
	I0520 10:21:07.914493    8148 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0520 10:21:14.419191    8148 kubeadm.go:309] [api-check] The API server is healthy after 6.504753261s
	I0520 10:21:14.439523    8148 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0520 10:21:14.457107    8148 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0520 10:21:14.486259    8148 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0520 10:21:14.486467    8148 kubeadm.go:309] [mark-control-plane] Marking the node addons-988376 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0520 10:21:14.499637    8148 kubeadm.go:309] [bootstrap-token] Using token: euppx3.lxnweorw3k80dw9f
	I0520 10:21:14.501829    8148 out.go:204]   - Configuring RBAC rules ...
	I0520 10:21:14.501976    8148 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0520 10:21:14.506226    8148 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0520 10:21:14.513787    8148 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0520 10:21:14.517479    8148 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0520 10:21:14.522845    8148 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0520 10:21:14.527808    8148 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0520 10:21:14.825187    8148 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0520 10:21:15.255754    8148 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0520 10:21:15.825887    8148 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0520 10:21:15.827806    8148 kubeadm.go:309] 
	I0520 10:21:15.827887    8148 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0520 10:21:15.827894    8148 kubeadm.go:309] 
	I0520 10:21:15.827968    8148 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0520 10:21:15.827973    8148 kubeadm.go:309] 
	I0520 10:21:15.827997    8148 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0520 10:21:15.828399    8148 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0520 10:21:15.828462    8148 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0520 10:21:15.828468    8148 kubeadm.go:309] 
	I0520 10:21:15.828520    8148 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0520 10:21:15.828527    8148 kubeadm.go:309] 
	I0520 10:21:15.828573    8148 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0520 10:21:15.828577    8148 kubeadm.go:309] 
	I0520 10:21:15.828630    8148 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0520 10:21:15.828707    8148 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0520 10:21:15.828783    8148 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0520 10:21:15.828787    8148 kubeadm.go:309] 
	I0520 10:21:15.829164    8148 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0520 10:21:15.829273    8148 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0520 10:21:15.829298    8148 kubeadm.go:309] 
	I0520 10:21:15.829532    8148 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token euppx3.lxnweorw3k80dw9f \
	I0520 10:21:15.829654    8148 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7bd08ea78a9524592710e3a5f7a43aa17f91ec5c2f5b7a9457dd23ee173b7c15 \
	I0520 10:21:15.829853    8148 kubeadm.go:309] 	--control-plane 
	I0520 10:21:15.829862    8148 kubeadm.go:309] 
	I0520 10:21:15.830122    8148 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0520 10:21:15.830131    8148 kubeadm.go:309] 
	I0520 10:21:15.830381    8148 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token euppx3.lxnweorw3k80dw9f \
	I0520 10:21:15.830652    8148 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:7bd08ea78a9524592710e3a5f7a43aa17f91ec5c2f5b7a9457dd23ee173b7c15 
	I0520 10:21:15.834517    8148 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1061-aws\n", err: exit status 1
	I0520 10:21:15.834662    8148 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0520 10:21:15.834690    8148 cni.go:84] Creating CNI manager for ""
	I0520 10:21:15.834705    8148 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 10:21:15.838664    8148 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0520 10:21:15.840347    8148 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0520 10:21:15.848764    8148 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
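Note on the bridge CNI step above: the log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown. Purely as an illustrative sketch (plugin names, subnet, and options here are assumptions, not the actual file), a bridge CNI configuration of this shape can be written with a shell heredoc:

	# Illustrative only -- not the exact 1-k8s.conflist minikube generates.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF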
	I0520 10:21:15.865810    8148 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0520 10:21:15.865923    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-988376 minikube.k8s.io/updated_at=2024_05_20T10_21_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91 minikube.k8s.io/name=addons-988376 minikube.k8s.io/primary=true
	I0520 10:21:15.865924    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:15.988853    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:15.988912    8148 ops.go:34] apiserver oom_adj: -16
	I0520 10:21:16.489363    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:16.989207    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:17.488932    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:17.988973    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:18.488978    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:18.989745    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:19.489437    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:19.989524    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:20.489200    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:20.989751    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:21.489861    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:21.989238    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:22.489035    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:22.988976    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:23.489142    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:23.988903    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:24.488976    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:24.989932    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:25.488977    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:25.989309    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:26.488927    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:26.989245    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:27.489247    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:27.989033    8148 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0520 10:21:28.090323    8148 kubeadm.go:1107] duration metric: took 12.224474229s to wait for elevateKubeSystemPrivileges
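The block of repeated "kubectl get sa default" calls above is minikube polling for the "default" ServiceAccount to appear after it applies the minikube-rbac ClusterRoleBinding; the step it times as elevateKubeSystemPrivileges finishes as soon as that ServiceAccount exists (about 12s here). A standalone sketch of the same wait, reusing the kubectl binary and kubeconfig paths shown in the log:

	# Poll until the controller-manager has created the default ServiceAccount.
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done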
	W0520 10:21:28.090359    8148 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0520 10:21:28.090367    8148 kubeadm.go:393] duration metric: took 34.133857218s to StartCluster
	I0520 10:21:28.090382    8148 settings.go:142] acquiring lock: {Name:mkf178671fce68e287b32051308c404994baee58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:21:28.090506    8148 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 10:21:28.090887    8148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/kubeconfig: {Name:mk3d714476b7ca0e67bf2a31cd3b93dbb70011b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 10:21:28.091088    8148 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 10:21:28.093897    8148 out.go:177] * Verifying Kubernetes components...
	I0520 10:21:28.091229    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0520 10:21:28.091401    8148 config.go:182] Loaded profile config "addons-988376": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:21:28.091411    8148 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0520 10:21:28.096345    8148 addons.go:69] Setting yakd=true in profile "addons-988376"
	I0520 10:21:28.096362    8148 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 10:21:28.096378    8148 addons.go:234] Setting addon yakd=true in "addons-988376"
	I0520 10:21:28.096407    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.096443    8148 addons.go:69] Setting ingress-dns=true in profile "addons-988376"
	I0520 10:21:28.096464    8148 addons.go:234] Setting addon ingress-dns=true in "addons-988376"
	I0520 10:21:28.096491    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.096969    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.097015    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.099379    8148 addons.go:69] Setting inspektor-gadget=true in profile "addons-988376"
	I0520 10:21:28.099417    8148 addons.go:234] Setting addon inspektor-gadget=true in "addons-988376"
	I0520 10:21:28.099474    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.099914    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.101785    8148 addons.go:69] Setting cloud-spanner=true in profile "addons-988376"
	I0520 10:21:28.101872    8148 addons.go:234] Setting addon cloud-spanner=true in "addons-988376"
	I0520 10:21:28.101958    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.104091    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.104169    8148 addons.go:69] Setting metrics-server=true in profile "addons-988376"
	I0520 10:21:28.104199    8148 addons.go:234] Setting addon metrics-server=true in "addons-988376"
	I0520 10:21:28.104243    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.104676    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.117136    8148 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-988376"
	I0520 10:21:28.117215    8148 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-988376"
	I0520 10:21:28.117268    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.117714    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.131967    8148 addons.go:69] Setting registry=true in profile "addons-988376"
	I0520 10:21:28.132029    8148 addons.go:234] Setting addon registry=true in "addons-988376"
	I0520 10:21:28.132067    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.132499    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.103572    8148 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-988376"
	I0520 10:21:28.103585    8148 addons.go:69] Setting default-storageclass=true in profile "addons-988376"
	I0520 10:21:28.135837    8148 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-988376"
	I0520 10:21:28.103589    8148 addons.go:69] Setting gcp-auth=true in profile "addons-988376"
	I0520 10:21:28.135908    8148 mustload.go:65] Loading cluster: addons-988376
	I0520 10:21:28.103595    8148 addons.go:69] Setting ingress=true in profile "addons-988376"
	I0520 10:21:28.135961    8148 addons.go:234] Setting addon ingress=true in "addons-988376"
	I0520 10:21:28.135995    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.139097    8148 addons.go:69] Setting storage-provisioner=true in profile "addons-988376"
	I0520 10:21:28.139207    8148 addons.go:234] Setting addon storage-provisioner=true in "addons-988376"
	I0520 10:21:28.139337    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.140082    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.166509    8148 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-988376"
	I0520 10:21:28.166557    8148 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-988376"
	I0520 10:21:28.166992    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.172960    8148 config.go:182] Loaded profile config "addons-988376": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:21:28.174928    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.175423    8148 addons.go:69] Setting volumesnapshots=true in profile "addons-988376"
	I0520 10:21:28.175460    8148 addons.go:234] Setting addon volumesnapshots=true in "addons-988376"
	I0520 10:21:28.175498    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.175859    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.175179    8148 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-988376"
	I0520 10:21:28.190247    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.190704    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.218094    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.226216    8148 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0520 10:21:28.235412    8148 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:21:28.235436    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0520 10:21:28.235504    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.279120    8148 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0520 10:21:28.293751    8148 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0520 10:21:28.304004    8148 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0520 10:21:28.304021    8148 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0520 10:21:28.304077    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.304282    8148 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:21:28.304304    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0520 10:21:28.304371    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.323945    8148 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 10:21:28.237320    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.303867    8148 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-988376"
	I0520 10:21:28.318871    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.326634    8148 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.0
	I0520 10:21:28.326744    8148 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:21:28.326749    8148 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0520 10:21:28.331950    8148 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0520 10:21:28.331971    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0520 10:21:28.332040    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.350420    8148 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0520 10:21:28.350448    8148 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0520 10:21:28.350518    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.328954    8148 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0520 10:21:28.328962    8148 out.go:177]   - Using image docker.io/registry:2.8.3
	I0520 10:21:28.328999    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.352896    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 10:21:28.353160    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.373199    8148 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 10:21:28.373219    8148 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 10:21:28.373283    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.381578    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0520 10:21:28.389704    8148 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0520 10:21:28.389733    8148 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0520 10:21:28.389815    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.414687    8148 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0520 10:21:28.418962    8148 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0520 10:21:28.418984    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0520 10:21:28.419051    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.444627    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.463024    8148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0520 10:21:28.453652    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.454079    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.477754    8148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:21:28.472232    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0520 10:21:28.486720    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0520 10:21:28.486394    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.489107    8148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:21:28.495598    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0520 10:21:28.498408    8148 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:21:28.498429    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0520 10:21:28.498492    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.501342    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0520 10:21:28.507503    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0520 10:21:28.509781    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0520 10:21:28.517191    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0520 10:21:28.511179    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.511209    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.519042    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.522583    8148 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0520 10:21:28.525186    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0520 10:21:28.525227    8148 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0520 10:21:28.525302    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.555015    8148 addons.go:234] Setting addon default-storageclass=true in "addons-988376"
	I0520 10:21:28.555060    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:28.555460    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:28.592990    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.597322    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.624668    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.625560    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.638146    8148 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0520 10:21:28.643244    8148 out.go:177]   - Using image docker.io/busybox:stable
	I0520 10:21:28.649026    8148 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:21:28.649081    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0520 10:21:28.649146    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.647470    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.647519    8148 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 10:21:28.649557    8148 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 10:21:28.649604    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:28.683429    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.695980    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:28.703036    8148 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0520 10:21:28.703141    8148 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 10:21:29.048626    8148 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0520 10:21:29.048697    8148 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0520 10:21:29.090901    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0520 10:21:29.120816    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0520 10:21:29.129012    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0520 10:21:29.181292    8148 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0520 10:21:29.181316    8148 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0520 10:21:29.197343    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 10:21:29.234964    8148 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0520 10:21:29.234986    8148 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0520 10:21:29.247344    8148 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0520 10:21:29.247365    8148 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0520 10:21:29.251521    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0520 10:21:29.259574    8148 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 10:21:29.259645    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0520 10:21:29.265853    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0520 10:21:29.265924    8148 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0520 10:21:29.311359    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 10:21:29.338376    8148 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0520 10:21:29.338451    8148 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0520 10:21:29.533216    8148 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0520 10:21:29.533290    8148 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0520 10:21:29.547478    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0520 10:21:29.585894    8148 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0520 10:21:29.585966    8148 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0520 10:21:29.630388    8148 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 10:21:29.630476    8148 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 10:21:29.639512    8148 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0520 10:21:29.639594    8148 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0520 10:21:29.739912    8148 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:21:29.739983    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0520 10:21:29.779553    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0520 10:21:29.779629    8148 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0520 10:21:29.977042    8148 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0520 10:21:29.977125    8148 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0520 10:21:29.988438    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0520 10:21:29.988511    8148 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0520 10:21:30.054920    8148 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:21:30.054999    8148 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 10:21:30.124214    8148 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0520 10:21:30.124290    8148 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0520 10:21:30.381734    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0520 10:21:30.384809    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 10:21:30.411549    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0520 10:21:30.411627    8148 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0520 10:21:30.417800    8148 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:21:30.417871    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0520 10:21:30.508777    8148 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:21:30.508848    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0520 10:21:30.541808    8148 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0520 10:21:30.541879    8148 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0520 10:21:30.716181    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0520 10:21:30.737805    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0520 10:21:30.737878    8148 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0520 10:21:30.767278    8148 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0520 10:21:30.767354    8148 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0520 10:21:30.774636    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:21:30.869474    8148 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.16631031s)
	I0520 10:21:30.870324    8148 node_ready.go:35] waiting up to 6m0s for node "addons-988376" to be "Ready" ...
	I0520 10:21:30.870589    8148 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.167525175s)
	I0520 10:21:30.870644    8148 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
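The sed pipeline that just completed rewrites the coredns ConfigMap in place: per the expressions visible in the command, it inserts a "log" directive before "errors" and a hosts block in front of the "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host gateway 192.168.49.1. The result can be checked with:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to now contain (per the sed expressions above):
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }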
	I0520 10:21:30.875875    8148 node_ready.go:49] node "addons-988376" has status "Ready":"True"
	I0520 10:21:30.875902    8148 node_ready.go:38] duration metric: took 5.552879ms for node "addons-988376" to be "Ready" ...
	I0520 10:21:30.875912    8148 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:21:30.886967    8148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace to be "Ready" ...
	I0520 10:21:31.084485    8148 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0520 10:21:31.084567    8148 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0520 10:21:31.139267    8148 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0520 10:21:31.139344    8148 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0520 10:21:31.374461    8148 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-988376" context rescaled to 1 replicas
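On the rescale above: kubeadm deploys coredns with two replicas by default, and minikube trims it to a single replica on this single-node cluster. The equivalent manual step would be roughly:

	kubectl -n kube-system scale deployment coredns --replicas=1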
	I0520 10:21:31.426108    8148 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0520 10:21:31.426177    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0520 10:21:31.470543    8148 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:21:31.470614    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0520 10:21:31.876371    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.78538725s)
	I0520 10:21:31.876536    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.74744718s)
	I0520 10:21:31.876562    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.755618458s)
	I0520 10:21:31.876640    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.679223265s)
	I0520 10:21:31.970669    8148 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0520 10:21:31.970741    8148 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0520 10:21:32.023233    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0520 10:21:32.280272    8148 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0520 10:21:32.280340    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0520 10:21:32.547594    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.295991955s)
	I0520 10:21:32.747671    8148 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0520 10:21:32.747767    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0520 10:21:32.792567    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.481128357s)
	I0520 10:21:32.922101    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:33.180338    8148 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:21:33.180413    8148 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0520 10:21:33.504292    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0520 10:21:34.971272    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:35.360200    8148 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0520 10:21:35.360352    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:35.386930    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:36.147722    8148 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0520 10:21:36.267160    8148 addons.go:234] Setting addon gcp-auth=true in "addons-988376"
	I0520 10:21:36.267256    8148 host.go:66] Checking if "addons-988376" exists ...
	I0520 10:21:36.267791    8148 cli_runner.go:164] Run: docker container inspect addons-988376 --format={{.State.Status}}
	I0520 10:21:36.303857    8148 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0520 10:21:36.303909    8148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-988376
	I0520 10:21:36.328640    8148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/addons-988376/id_rsa Username:docker}
	I0520 10:21:37.286490    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.738924798s)
	I0520 10:21:37.286597    8148 addons.go:470] Verifying addon ingress=true in "addons-988376"
	I0520 10:21:37.289307    8148 out.go:177] * Verifying ingress addon...
	I0520 10:21:37.286857    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (6.905050095s)
	I0520 10:21:37.286924    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.902049862s)
	I0520 10:21:37.286957    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.57070779s)
	I0520 10:21:37.287040    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.512326925s)
	I0520 10:21:37.287132    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.263818988s)
	I0520 10:21:37.289475    8148 addons.go:470] Verifying addon registry=true in "addons-988376"
	W0520 10:21:37.289658    8148 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0520 10:21:37.289681    8148 retry.go:31] will retry after 355.090091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
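The apply failure and retry above are an ordering race rather than a broken manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is applied in the same batch as the CRDs that define it, and the API server has not finished registering the new kinds, hence "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first". minikube retries after ~355ms, and the forced re-apply later in the log completes without a further error. A manual way around the same race, sketched here with the CRD names from the manifests above, is to apply the CRDs first and wait for them to be established before applying the class:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	              -f snapshot.storage.k8s.io_volumesnapshots.yaml
	kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	    crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	    crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml   # the kind now resolves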
	I0520 10:21:37.292900    8148 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0520 10:21:37.289495    8148 addons.go:470] Verifying addon metrics-server=true in "addons-988376"
	I0520 10:21:37.295312    8148 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-988376 service yakd-dashboard -n yakd-dashboard
	
	I0520 10:21:37.297513    8148 out.go:177] * Verifying registry addon...
	I0520 10:21:37.298764    8148 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0520 10:21:37.300346    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:37.301038    8148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0520 10:21:37.306494    8148 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0520 10:21:37.306556    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:37.394109    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:37.644968    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0520 10:21:37.797484    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:37.805641    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:38.298263    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:38.305633    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:38.797622    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:38.805699    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:39.095649    8148 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.791757834s)
	I0520 10:21:39.098295    8148 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0520 10:21:39.095981    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.591602106s)
	I0520 10:21:39.100885    8148 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-988376"
	I0520 10:21:39.103556    8148 out.go:177] * Verifying csi-hostpath-driver addon...
	I0520 10:21:39.105934    8148 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0520 10:21:39.108840    8148 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0520 10:21:39.108867    8148 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0520 10:21:39.106779    8148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0520 10:21:39.152687    8148 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0520 10:21:39.152714    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:39.183237    8148 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0520 10:21:39.183268    8148 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0520 10:21:39.228172    8148 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:21:39.228196    8148 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0520 10:21:39.250121    8148 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0520 10:21:39.316093    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:39.328399    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:39.615166    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:39.622886    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.977815652s)
	I0520 10:21:39.797500    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:39.805992    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:39.893035    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:40.120693    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:40.306160    8148 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.055999232s)
	I0520 10:21:40.308702    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:40.309570    8148 addons.go:470] Verifying addon gcp-auth=true in "addons-988376"
	I0520 10:21:40.311881    8148 out.go:177] * Verifying gcp-auth addon...
	I0520 10:21:40.314990    8148 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0520 10:21:40.318784    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:40.319150    8148 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0520 10:21:40.319159    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:40.615021    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:40.796951    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:40.806333    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:40.818484    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:40.893571    8148 pod_ready.go:97] pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:40 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-05-20 10:21:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-20 10:21:30 +0000 UTC,FinishedAt:2024-05-20 10:21:40 +0000 UTC,ContainerID:docker://81f2c7d61127a08d17a6a41dafdb39f5812dfc921f364097e660886c5bfcf762,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://81f2c7d61127a08d17a6a41dafdb39f5812dfc921f364097e660886c5bfcf762 Started:0x40015dbe40 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0520 10:21:40.893651    8148 pod_ready.go:81] duration metric: took 10.006594592s for pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace to be "Ready" ...
	E0520 10:21:40.893678    8148 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-6j4lg" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:40 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-20 10:21:28 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-05-20 10:21:28 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-20 10:21:30 +0000 UTC,FinishedAt:2024-05-20 10:21:40 +0000 UTC,ContainerID:docker://81f2c7d61127a08d17a6a41dafdb39f5812dfc921f364097e660886c5bfcf762,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://81f2c7d61127a08d17a6a41dafdb39f5812dfc921f364097e660886c5bfcf762 Started:0x40015dbe40 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0520 10:21:40.893715    8148 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace to be "Ready" ...
	I0520 10:21:41.114382    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:41.298197    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:41.305721    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:41.318887    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:41.614951    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:41.806095    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:41.807025    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:41.818728    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:42.116396    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:42.298010    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:42.307438    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:42.319634    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:42.615535    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:42.798119    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:42.806395    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:42.819493    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:42.900358    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:43.116221    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:43.297293    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:43.305528    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:43.319595    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:43.614218    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:43.797357    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:43.805208    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:43.818446    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:44.115898    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:44.297193    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:44.305175    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:44.318127    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:44.614301    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:44.797288    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:44.805328    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:44.818575    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:44.900759    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:45.121083    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:45.298487    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:45.307263    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:45.318733    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:45.615354    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:45.797594    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:45.805869    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:45.819354    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:46.115414    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:46.297793    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:46.308259    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:46.318919    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:46.614885    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:46.797902    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:46.806608    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:46.818863    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:47.114866    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:47.297152    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:47.305452    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:47.319113    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:47.400412    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:47.614900    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:47.797105    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:47.805522    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:47.818790    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:48.115352    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:48.298207    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:48.308110    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:48.318572    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:48.615194    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:48.797131    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:48.805216    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:48.818668    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:49.114368    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:49.298756    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:49.307458    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:49.319176    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:49.401248    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:49.614678    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:49.798127    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:49.817773    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:49.821521    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:50.115042    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:50.297467    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:50.305860    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:50.319279    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:50.616260    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:50.797696    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:50.806216    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:50.818969    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:51.116224    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:51.298292    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:51.306157    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:51.318516    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:51.615277    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:51.797479    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:51.805950    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:51.818985    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:51.905973    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:52.115173    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:52.298742    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:52.307483    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:52.319076    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:52.614725    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:52.797933    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:52.809893    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:52.820427    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:53.125906    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:53.298137    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:53.305786    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:53.319117    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:53.615342    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:53.798894    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:53.806337    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:53.818995    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:54.114931    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:54.297131    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:54.305132    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:54.318263    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:54.399995    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:54.615281    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:54.798408    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:54.807485    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:54.820544    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:55.115761    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:55.297135    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:55.305777    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:55.319241    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:55.615501    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:55.797509    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:55.806294    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:55.818748    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:56.115442    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:56.297365    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:56.305404    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:56.319483    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:56.400578    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:56.615128    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:56.797832    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:56.806467    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:56.819127    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:57.115435    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:57.297734    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:57.306460    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:57.319015    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:57.615179    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:57.797446    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:57.805610    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:57.818297    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:58.114896    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:58.297716    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:58.305727    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:58.325234    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:58.616142    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:58.797386    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:58.805800    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0520 10:21:58.819284    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:58.900629    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:21:59.115154    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:59.297506    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:59.306129    8148 kapi.go:107] duration metric: took 22.005086265s to wait for kubernetes.io/minikube-addons=registry ...
	I0520 10:21:59.318371    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:21:59.614147    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:21:59.797337    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:21:59.818806    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:00.124149    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:00.300773    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:00.320653    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:00.616349    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:00.802986    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:00.821191    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:01.115532    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:01.299195    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:01.319779    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:01.412619    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:01.615678    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:01.803121    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:01.819436    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:02.120149    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:02.302189    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:02.324324    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:02.618324    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:02.799275    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:02.818793    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:03.115558    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:03.297891    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:03.319503    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:03.614118    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:03.797649    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:03.819288    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:03.901098    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:04.115088    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:04.297710    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:04.329959    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:04.614331    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:04.804498    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:04.818509    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:05.115550    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:05.304732    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:05.332332    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:05.614787    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:05.797276    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:05.818537    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:06.119610    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:06.297992    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:06.319448    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:06.400676    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:06.614892    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:06.797796    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:06.818724    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:07.116701    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:07.298770    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:07.320325    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:07.617913    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:07.802904    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:07.819220    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:08.114551    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:08.298351    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:08.319138    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:08.615521    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:08.798761    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:08.820302    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:08.906545    8148 pod_ready.go:102] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"False"
	I0520 10:22:09.116433    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:09.297531    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:09.318932    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:09.404024    8148 pod_ready.go:92] pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:09.404101    8148 pod_ready.go:81] duration metric: took 28.510357957s for pod "coredns-7db6d8ff4d-b5shk" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.404185    8148 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.413843    8148 pod_ready.go:92] pod "etcd-addons-988376" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:09.413921    8148 pod_ready.go:81] duration metric: took 9.703405ms for pod "etcd-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.413951    8148 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.421675    8148 pod_ready.go:92] pod "kube-apiserver-addons-988376" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:09.421754    8148 pod_ready.go:81] duration metric: took 7.781387ms for pod "kube-apiserver-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.421784    8148 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.429432    8148 pod_ready.go:92] pod "kube-controller-manager-addons-988376" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:09.429497    8148 pod_ready.go:81] duration metric: took 7.691851ms for pod "kube-controller-manager-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.429526    8148 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgqqg" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.435814    8148 pod_ready.go:92] pod "kube-proxy-jgqqg" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:09.435925    8148 pod_ready.go:81] duration metric: took 6.370005ms for pod "kube-proxy-jgqqg" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.435976    8148 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.615727    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:09.797008    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:09.798713    8148 pod_ready.go:92] pod "kube-scheduler-addons-988376" in "kube-system" namespace has status "Ready":"True"
	I0520 10:22:09.798776    8148 pod_ready.go:81] duration metric: took 362.75864ms for pod "kube-scheduler-addons-988376" in "kube-system" namespace to be "Ready" ...
	I0520 10:22:09.798831    8148 pod_ready.go:38] duration metric: took 38.92287427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 10:22:09.798863    8148 api_server.go:52] waiting for apiserver process to appear ...
	I0520 10:22:09.798942    8148 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:22:09.822150    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:09.835193    8148 api_server.go:72] duration metric: took 41.744072367s to wait for apiserver process to appear ...
	I0520 10:22:09.835229    8148 api_server.go:88] waiting for apiserver healthz status ...
	I0520 10:22:09.835264    8148 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0520 10:22:09.847244    8148 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0520 10:22:09.848427    8148 api_server.go:141] control plane version: v1.30.1
	I0520 10:22:09.848451    8148 api_server.go:131] duration metric: took 13.214844ms to wait for apiserver health ...
	I0520 10:22:09.848460    8148 system_pods.go:43] waiting for kube-system pods to appear ...
	I0520 10:22:10.004894    8148 system_pods.go:59] 17 kube-system pods found
	I0520 10:22:10.004941    8148 system_pods.go:61] "coredns-7db6d8ff4d-b5shk" [ff0dabdc-59b8-4a5c-88ae-e8260647153a] Running
	I0520 10:22:10.004951    8148 system_pods.go:61] "csi-hostpath-attacher-0" [ab10fb57-eed4-4187-8d37-8d386b9788d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0520 10:22:10.004960    8148 system_pods.go:61] "csi-hostpath-resizer-0" [1b404dbd-a7fe-48ce-8f19-5aa17759d2e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0520 10:22:10.004970    8148 system_pods.go:61] "csi-hostpathplugin-cv6nc" [db9c9344-5483-431c-ad4c-3af160f466c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0520 10:22:10.004976    8148 system_pods.go:61] "etcd-addons-988376" [f0c7c3e2-04b6-4119-bf9f-802c6aef5cb1] Running
	I0520 10:22:10.004981    8148 system_pods.go:61] "kube-apiserver-addons-988376" [97e82841-0b04-4d1d-b82d-d5233a22d694] Running
	I0520 10:22:10.004985    8148 system_pods.go:61] "kube-controller-manager-addons-988376" [6de6c116-02d0-4138-80f8-68904aa52cbc] Running
	I0520 10:22:10.004991    8148 system_pods.go:61] "kube-ingress-dns-minikube" [f36e2e50-4a79-49e3-9b00-48feab710d6d] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 10:22:10.004995    8148 system_pods.go:61] "kube-proxy-jgqqg" [05c69b36-f98a-48ad-894f-38699437a8fa] Running
	I0520 10:22:10.004999    8148 system_pods.go:61] "kube-scheduler-addons-988376" [7c59e30d-9ed3-4747-bc48-9ff4c3697660] Running
	I0520 10:22:10.005002    8148 system_pods.go:61] "metrics-server-c59844bb4-nrn8r" [97655ebe-e88e-4796-9095-eb1b95ded83d] Running
	I0520 10:22:10.005009    8148 system_pods.go:61] "nvidia-device-plugin-daemonset-r7h5n" [27fcc97d-9dd2-482d-9f4c-21e0b23cca10] Running
	I0520 10:22:10.005012    8148 system_pods.go:61] "registry-proxy-pbt6j" [a0627700-0f62-473d-9b8b-54789b3fdc5e] Running
	I0520 10:22:10.005015    8148 system_pods.go:61] "registry-tbqsk" [6f6cb288-6543-4600-9365-2eddbbfb91ea] Running
	I0520 10:22:10.005019    8148 system_pods.go:61] "snapshot-controller-745499f584-cwwvf" [fb1b2bfa-0943-4d20-93ff-69db82c39084] Running
	I0520 10:22:10.005023    8148 system_pods.go:61] "snapshot-controller-745499f584-z5jht" [7878a4b3-6a87-49a7-a765-918486a266a9] Running
	I0520 10:22:10.005026    8148 system_pods.go:61] "storage-provisioner" [5f7c601c-1f01-4efb-a8c4-d50e8e61a509] Running
	I0520 10:22:10.005032    8148 system_pods.go:74] duration metric: took 156.567297ms to wait for pod list to return data ...
	I0520 10:22:10.005041    8148 default_sa.go:34] waiting for default service account to be created ...
	I0520 10:22:10.115486    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:10.196962    8148 default_sa.go:45] found service account: "default"
	I0520 10:22:10.196987    8148 default_sa.go:55] duration metric: took 191.938295ms for default service account to be created ...
	I0520 10:22:10.196997    8148 system_pods.go:116] waiting for k8s-apps to be running ...
	I0520 10:22:10.297407    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:10.318430    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:10.405726    8148 system_pods.go:86] 17 kube-system pods found
	I0520 10:22:10.405810    8148 system_pods.go:89] "coredns-7db6d8ff4d-b5shk" [ff0dabdc-59b8-4a5c-88ae-e8260647153a] Running
	I0520 10:22:10.405836    8148 system_pods.go:89] "csi-hostpath-attacher-0" [ab10fb57-eed4-4187-8d37-8d386b9788d9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0520 10:22:10.405860    8148 system_pods.go:89] "csi-hostpath-resizer-0" [1b404dbd-a7fe-48ce-8f19-5aa17759d2e1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0520 10:22:10.405891    8148 system_pods.go:89] "csi-hostpathplugin-cv6nc" [db9c9344-5483-431c-ad4c-3af160f466c4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0520 10:22:10.405912    8148 system_pods.go:89] "etcd-addons-988376" [f0c7c3e2-04b6-4119-bf9f-802c6aef5cb1] Running
	I0520 10:22:10.405933    8148 system_pods.go:89] "kube-apiserver-addons-988376" [97e82841-0b04-4d1d-b82d-d5233a22d694] Running
	I0520 10:22:10.405964    8148 system_pods.go:89] "kube-controller-manager-addons-988376" [6de6c116-02d0-4138-80f8-68904aa52cbc] Running
	I0520 10:22:10.405993    8148 system_pods.go:89] "kube-ingress-dns-minikube" [f36e2e50-4a79-49e3-9b00-48feab710d6d] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0520 10:22:10.406018    8148 system_pods.go:89] "kube-proxy-jgqqg" [05c69b36-f98a-48ad-894f-38699437a8fa] Running
	I0520 10:22:10.406053    8148 system_pods.go:89] "kube-scheduler-addons-988376" [7c59e30d-9ed3-4747-bc48-9ff4c3697660] Running
	I0520 10:22:10.406073    8148 system_pods.go:89] "metrics-server-c59844bb4-nrn8r" [97655ebe-e88e-4796-9095-eb1b95ded83d] Running
	I0520 10:22:10.406108    8148 system_pods.go:89] "nvidia-device-plugin-daemonset-r7h5n" [27fcc97d-9dd2-482d-9f4c-21e0b23cca10] Running
	I0520 10:22:10.406140    8148 system_pods.go:89] "registry-proxy-pbt6j" [a0627700-0f62-473d-9b8b-54789b3fdc5e] Running
	I0520 10:22:10.406159    8148 system_pods.go:89] "registry-tbqsk" [6f6cb288-6543-4600-9365-2eddbbfb91ea] Running
	I0520 10:22:10.406180    8148 system_pods.go:89] "snapshot-controller-745499f584-cwwvf" [fb1b2bfa-0943-4d20-93ff-69db82c39084] Running
	I0520 10:22:10.406221    8148 system_pods.go:89] "snapshot-controller-745499f584-z5jht" [7878a4b3-6a87-49a7-a765-918486a266a9] Running
	I0520 10:22:10.406238    8148 system_pods.go:89] "storage-provisioner" [5f7c601c-1f01-4efb-a8c4-d50e8e61a509] Running
	I0520 10:22:10.406261    8148 system_pods.go:126] duration metric: took 209.258121ms to wait for k8s-apps to be running ...
	I0520 10:22:10.406291    8148 system_svc.go:44] waiting for kubelet service to be running ....
	I0520 10:22:10.406382    8148 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:22:10.428241    8148 system_svc.go:56] duration metric: took 21.950285ms WaitForService to wait for kubelet
	I0520 10:22:10.428328    8148 kubeadm.go:576] duration metric: took 42.337210798s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 10:22:10.428370    8148 node_conditions.go:102] verifying NodePressure condition ...
	I0520 10:22:10.597986    8148 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0520 10:22:10.598068    8148 node_conditions.go:123] node cpu capacity is 2
	I0520 10:22:10.598096    8148 node_conditions.go:105] duration metric: took 169.687631ms to run NodePressure ...
	I0520 10:22:10.598122    8148 start.go:240] waiting for startup goroutines ...
	I0520 10:22:10.615520    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:10.799351    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:10.819730    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:11.115505    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:11.298046    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:11.318359    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:11.615441    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:11.797637    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:11.818521    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:12.115871    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:12.298040    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:12.318819    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:12.615064    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:12.797708    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:12.819430    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:13.116400    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:13.298910    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:13.320123    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:13.614873    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:13.798253    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:13.819747    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:14.115501    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:14.298934    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:14.321131    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:14.615411    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:14.798003    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:14.818924    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:15.115601    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:15.305133    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:15.323213    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:15.615502    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:15.797311    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:15.822588    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:16.115298    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:16.297682    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:16.319262    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:16.617396    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:16.797691    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:16.818834    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:17.114495    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:17.297789    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:17.318119    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:17.614148    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:17.797769    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:17.819325    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:18.114486    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:18.297643    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:18.318834    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:18.614029    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:18.798264    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:18.818408    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:19.118093    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:19.297124    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:19.318494    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:19.616155    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:19.797214    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:19.818535    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:20.114362    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:20.297514    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:20.318969    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:20.614022    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:20.798737    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:20.818912    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:21.115558    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:21.298048    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:21.318563    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:21.614646    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:21.797957    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:21.818228    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:22.114761    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:22.297747    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:22.320255    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:22.615081    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0520 10:22:22.797163    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:22.818536    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:23.115138    8148 kapi.go:107] duration metric: took 44.00835903s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0520 10:22:23.297468    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:23.318765    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:23.797938    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:23.819313    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:24.297830    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:24.320434    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:24.796892    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:24.819157    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:25.298064    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:25.318383    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:25.796775    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:25.819025    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:26.297624    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:26.319228    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:26.796866    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:26.819065    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:27.297670    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:27.318888    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:27.797103    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:27.818584    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:28.298085    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:28.318570    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:28.797199    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:28.818557    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:29.297120    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:29.318678    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:29.797207    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:29.818523    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:30.297705    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:30.318704    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:30.797031    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:30.818691    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:31.298008    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:31.318724    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:31.797173    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:31.818938    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:32.298070    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:32.318649    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:32.797581    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:32.818902    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:33.297192    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:33.319144    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:33.796934    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:33.819106    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:34.297576    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:34.318801    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:34.797384    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:34.818640    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:35.297397    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:35.318653    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:35.797768    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:35.819201    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:36.297849    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:36.319210    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:36.798009    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:36.818314    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:37.296862    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:37.319580    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:37.797486    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:37.818631    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:38.297323    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:38.318722    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:38.797042    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:38.818506    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:39.297704    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:39.318789    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:39.797215    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:39.818671    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:40.297825    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:40.319040    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:40.797234    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:40.818172    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:41.297075    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:41.318535    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:41.797928    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:41.819079    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:42.297737    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:42.319488    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:42.799276    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:42.819683    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:43.298265    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.318865    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:43.797195    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:43.818390    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:44.298949    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:44.318074    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:44.797806    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:44.819319    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:45.298118    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:45.319245    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:45.798430    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:45.820451    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:46.299544    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:46.319127    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:46.798203    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:46.818193    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:47.297519    8148 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0520 10:22:47.318838    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:47.798100    8148 kapi.go:107] duration metric: took 1m10.505201439s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0520 10:22:47.818738    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:48.319029    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:48.818378    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:49.318534    8148 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0520 10:22:49.818162    8148 kapi.go:107] duration metric: took 1m9.503168729s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0520 10:22:49.820342    8148 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-988376 cluster.
	I0520 10:22:49.822721    8148 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0520 10:22:49.825115    8148 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0520 10:22:49.827164    8148 out.go:177] * Enabled addons: ingress-dns, cloud-spanner, nvidia-device-plugin, default-storageclass, storage-provisioner-rancher, storage-provisioner, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0520 10:22:49.829198    8148 addons.go:505] duration metric: took 1m21.737781046s for enable addons: enabled=[ingress-dns cloud-spanner nvidia-device-plugin default-storageclass storage-provisioner-rancher storage-provisioner inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0520 10:22:49.829243    8148 start.go:245] waiting for cluster config update ...
	I0520 10:22:49.829264    8148 start.go:254] writing updated cluster config ...
	I0520 10:22:49.829891    8148 ssh_runner.go:195] Run: rm -f paused
	I0520 10:22:50.187832    8148 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0520 10:22:50.190345    8148 out.go:177] * Done! kubectl is now configured to use "addons-988376" cluster and "default" namespace by default
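
	Note on the gcp-auth messages above: the addon states that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal illustrative sketch (not part of the captured log), the Go snippet below builds such a pod manifest using client-go types; the label key comes from the addon output, while the label value "true", the pod name, namespace, and container image are assumptions for illustration only.

	// sketch: emit a pod manifest that skips gcp-auth credential injection
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical name
				Namespace: "default",      // assumed namespace
				Labels: map[string]string{
					// key taken from the addon message; value assumed
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "nginx:alpine"}, // placeholder container
				},
			},
		}
		out, err := yaml.Marshal(pod) // render as YAML for kubectl
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}

	Applying the printed manifest (for example via `kubectl apply -f -`) should, per the addon's own description, leave that pod without the mounted GCP credentials.
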
	
	
	==> Docker <==
	May 20 10:23:54 addons-988376 dockerd[1137]: time="2024-05-20T10:23:54.272184661Z" level=info msg="ignoring event" container=ff9ed37a73877b141000beeafe30b4094888ef7b8c7e5b58965f57742545e4b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:23:54 addons-988376 dockerd[1137]: time="2024-05-20T10:23:54.442916865Z" level=info msg="ignoring event" container=d6290c3a7b11086b5f88e0264feeb45f1864f312acce75881610e63ea4359b58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:23:55 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:23:55Z" level=error msg="error getting RW layer size for container ID 'ff9ed37a73877b141000beeafe30b4094888ef7b8c7e5b58965f57742545e4b7': Error response from daemon: No such container: ff9ed37a73877b141000beeafe30b4094888ef7b8c7e5b58965f57742545e4b7"
	May 20 10:23:55 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:23:55Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ff9ed37a73877b141000beeafe30b4094888ef7b8c7e5b58965f57742545e4b7'"
	May 20 10:23:55 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:23:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4bd6491e31bb9248713ed2c4cdf3c64e55434a4310a374419ed6be4e929378bd/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	May 20 10:23:55 addons-988376 dockerd[1137]: time="2024-05-20T10:23:55.784305123Z" level=warning msg="reference for unknown type: " digest="sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474" remote="ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474" spanID=e87998d82f75bae8 traceID=b2b62e622b910082749557dd0ff1cdbd
	May 20 10:23:58 addons-988376 dockerd[1137]: time="2024-05-20T10:23:58.013004604Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=17f18c98f9fe8b6bbeb3e8255b6ba7ab384a11c8a277dfa614badde5124684da spanID=c6104f63f7ec79e4 traceID=73bfe113ac2b171455cb1a7b7478aca1
	May 20 10:23:58 addons-988376 dockerd[1137]: time="2024-05-20T10:23:58.071937103Z" level=info msg="ignoring event" container=17f18c98f9fe8b6bbeb3e8255b6ba7ab384a11c8a277dfa614badde5124684da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:23:58 addons-988376 dockerd[1137]: time="2024-05-20T10:23:58.203316023Z" level=info msg="ignoring event" container=4b23da9cfbc524d9941a4ac435e85fa33d5784e72ba394ae54166c081b85059a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:23:58 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:23:58Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.23.2@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474"
	May 20 10:24:11 addons-988376 dockerd[1137]: time="2024-05-20T10:24:11.771269893Z" level=info msg="ignoring event" container=04e439f70cd3bcf3ea914d62949466e41625bb6a3b51ce9985e2df2e20a2bfa8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:18 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:24:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6597ff74f5c05c601c025f26e2fdcdd1ac99df869f86c6a6cae0f4a77fe0cdc2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	May 20 10:24:18 addons-988376 dockerd[1137]: time="2024-05-20T10:24:18.388977701Z" level=info msg="ignoring event" container=0f7663b4fc36c37419e6b66c7604894c976985f878207a0aea285d9c8ce22932 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:18 addons-988376 dockerd[1137]: time="2024-05-20T10:24:18.505484120Z" level=info msg="ignoring event" container=04113a82c480b97d6b0eab16afdcb90efbb95c0e2af1c934fec078c298e19aa6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:19 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:24:19Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	May 20 10:24:27 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:24:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/43b2e663be2384ab8df12fb31ebfa271438b270fc19c51ca32bba10ad0a2a8ac/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	May 20 10:24:29 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:24:29Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	May 20 10:24:29 addons-988376 dockerd[1137]: time="2024-05-20T10:24:29.933798476Z" level=info msg="ignoring event" container=fae4e5258299fd48899ba2dcdeb037e7b6785240148d94c07a77ce6ee66e8340 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:31 addons-988376 dockerd[1137]: time="2024-05-20T10:24:31.071112427Z" level=info msg="ignoring event" container=06f1bc007ba7ac499f9424cd03982f1155273ac8c24668d0d803345d20e86e45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:43 addons-988376 dockerd[1137]: time="2024-05-20T10:24:43.068810404Z" level=info msg="ignoring event" container=da1dd6284894bb517305c3ca2b3156a3358ec2a4c4505d209668d65821d802e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:47 addons-988376 dockerd[1137]: time="2024-05-20T10:24:47.073336696Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=4fb7f9c185dc30491e42da84494fcfcded19f510e83db873d73018ba80ed9673 spanID=6f3878d0802f4165 traceID=26473b357e45828ca92c1c8d6f14e222
	May 20 10:24:47 addons-988376 dockerd[1137]: time="2024-05-20T10:24:47.120534268Z" level=info msg="ignoring event" container=4fb7f9c185dc30491e42da84494fcfcded19f510e83db873d73018ba80ed9673 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:47 addons-988376 cri-dockerd[1349]: time="2024-05-20T10:24:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"ingress-nginx-controller-768f948f8f-8jmww_ingress-nginx\": unexpected command output nsenter: cannot open /proc/8020/ns/net: No such file or directory\n with error: exit status 1"
	May 20 10:24:47 addons-988376 dockerd[1137]: time="2024-05-20T10:24:47.288426606Z" level=info msg="ignoring event" container=3bf22b10b4266650d15cd290b421dcbfca1d6f69bb4f99b21df8d623ae6d948e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 20 10:24:48 addons-988376 dockerd[1137]: time="2024-05-20T10:24:48.238258395Z" level=info msg="ignoring event" container=d187d789ba0c6490808e5a61b454d7b754328eb2ee4a1e416a72c71c99e8f8a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d187d789ba0c6       dd1b12fcb6097                                                                                                                3 seconds ago       Exited              hello-world-app           2                   43b2e663be238       hello-world-app-86c47465fc-nnzw6
	b4ab88cf8e386       nginx@sha256:516475cc129da42866742567714ddc681e5eed7b9ee0b9e9c015e464b4221a00                                                32 seconds ago      Running             nginx                     0                   6597ff74f5c05       nginx
	0d79732e12925       ghcr.io/headlamp-k8s/headlamp@sha256:34d59bf120f98415e3a69401f6636032a0dc39e1dbfcff149c09591de0fad474                        53 seconds ago      Running             headlamp                  0                   4bd6491e31bb9       headlamp-68456f997b-4grst
	5759982e0bb80       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 2 minutes ago       Running             gcp-auth                  0                   9005b658bcfa0       gcp-auth-5db96cd9b4-j68jr
	be0215d7819e7       296b5f799fcd8                                                                                                                2 minutes ago       Exited              patch                     1                   db2855e6f5fb6       ingress-nginx-admission-patch-d7jch
	f65ae58c1429e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366   2 minutes ago       Exited              create                    0                   074abea1720e4       ingress-nginx-admission-create-nwhlb
	6229b43923835       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        2 minutes ago       Running             yakd                      0                   ddf9904a1e46a       yakd-dashboard-5ddbf7d777-s8qxl
	638c19435555b       ba04bb24b9575                                                                                                                3 minutes ago       Running             storage-provisioner       0                   57e62962d3228       storage-provisioner
	d295771d2a058       2437cf7621777                                                                                                                3 minutes ago       Running             coredns                   0                   0f7cffe584dec       coredns-7db6d8ff4d-b5shk
	53d0615bc6ca7       05eccb821e159                                                                                                                3 minutes ago       Running             kube-proxy                0                   16ee9945862a0       kube-proxy-jgqqg
	88eb6d1e3ea4a       014faa467e297                                                                                                                3 minutes ago       Running             etcd                      0                   74eab8fe48cb0       etcd-addons-988376
	dc9c67ae0f86c       234ac56e455be                                                                                                                3 minutes ago       Running             kube-controller-manager   0                   3c25b3b88dc49       kube-controller-manager-addons-988376
	7716976c0afc1       163ff818d154d                                                                                                                3 minutes ago       Running             kube-scheduler            0                   06859384a77ea       kube-scheduler-addons-988376
	ec1f433403d6e       988b55d423baf                                                                                                                3 minutes ago       Running             kube-apiserver            0                   91d737c70e477       kube-apiserver-addons-988376
	
	
	==> coredns [d295771d2a05] <==
	[INFO] 10.244.0.20:41453 - 47115 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004169s
	[INFO] 10.244.0.20:47239 - 63849 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002264969s
	[INFO] 10.244.0.20:41453 - 59706 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002254344s
	[INFO] 10.244.0.20:47239 - 36164 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001922048s
	[INFO] 10.244.0.20:41453 - 41989 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001728963s
	[INFO] 10.244.0.20:41453 - 47465 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000126885s
	[INFO] 10.244.0.20:47239 - 56718 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000063401s
	[INFO] 10.244.0.20:33345 - 18414 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00045946s
	[INFO] 10.244.0.20:33345 - 43277 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000158164s
	[INFO] 10.244.0.20:44562 - 55668 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000176774s
	[INFO] 10.244.0.20:33345 - 14229 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000045653s
	[INFO] 10.244.0.20:44562 - 38479 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00007653s
	[INFO] 10.244.0.20:33345 - 15394 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000208265s
	[INFO] 10.244.0.20:44562 - 34402 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000129142s
	[INFO] 10.244.0.20:33345 - 63937 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000174681s
	[INFO] 10.244.0.20:44562 - 14204 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000068341s
	[INFO] 10.244.0.20:33345 - 36010 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044603s
	[INFO] 10.244.0.20:44562 - 25887 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067816s
	[INFO] 10.244.0.20:44562 - 55998 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067463s
	[INFO] 10.244.0.20:33345 - 2046 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001408968s
	[INFO] 10.244.0.20:44562 - 25216 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001483365s
	[INFO] 10.244.0.20:33345 - 22925 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001824497s
	[INFO] 10.244.0.20:33345 - 49689 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000090963s
	[INFO] 10.244.0.20:44562 - 33904 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001011113s
	[INFO] 10.244.0.20:44562 - 37532 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068243s
	
	
	==> describe nodes <==
	Name:               addons-988376
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-988376
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=addons-988376
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T10_21_15_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-988376
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 10:21:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-988376
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 10:24:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 10:24:48 +0000   Mon, 20 May 2024 10:21:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 10:24:48 +0000   Mon, 20 May 2024 10:21:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 10:24:48 +0000   Mon, 20 May 2024 10:21:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 10:24:48 +0000   Mon, 20 May 2024 10:21:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-988376
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022428Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022428Ki
	  pods:               110
	System Info:
	  Machine ID:                 ea9a40a64074453ba0afd65739f8c266
	  System UUID:                4a1bbfe8-d43f-451a-a2e7-40bd4643cac8
	  Boot ID:                    360c613b-7d2d-4efb-a784-5066f036d5dd
	  Kernel Version:             5.15.0-1061-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-86c47465fc-nnzw6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  gcp-auth                    gcp-auth-5db96cd9b4-j68jr                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  headlamp                    headlamp-68456f997b-4grst                0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 coredns-7db6d8ff4d-b5shk                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m24s
	  kube-system                 etcd-addons-988376                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m37s
	  kube-system                 kube-apiserver-addons-988376             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-controller-manager-addons-988376    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-jgqqg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-addons-988376             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-s8qxl          0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m22s  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m45s  kubelet          Node addons-988376 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m37s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m37s  kubelet          Node addons-988376 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m37s  kubelet          Node addons-988376 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m37s  kubelet          Node addons-988376 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m37s  kubelet          Node addons-988376 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m27s  kubelet          Node addons-988376 status is now: NodeReady
	  Normal  RegisteredNode           3m25s  node-controller  Node addons-988376 event: Registered Node addons-988376 in Controller
	
	
	==> dmesg <==
	[May20 10:18] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014935] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.549683] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003176] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.019285] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.005149] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.004202] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.643027] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.563006] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [88eb6d1e3ea4] <==
	{"level":"info","ts":"2024-05-20T10:21:08.800043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-20T10:21:08.800127Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-20T10:21:08.800418Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-20T10:21:08.800478Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-20T10:21:08.800489Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-20T10:21:08.802496Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-20T10:21:08.80264Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-20T10:21:09.18911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-20T10:21:09.189158Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-20T10:21:09.189187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-20T10:21:09.189218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-20T10:21:09.189227Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-20T10:21:09.189238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-20T10:21:09.189246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-20T10:21:09.191713Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:21:09.194176Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-988376 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-20T10:21:09.194381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:21:09.194898Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:21:09.195105Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:21:09.195201Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-20T10:21:09.195295Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-20T10:21:09.198707Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-20T10:21:09.202284Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-20T10:21:09.202961Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-20T10:21:09.213439Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [5759982e0bb8] <==
	2024/05/20 10:22:48 GCP Auth Webhook started!
	2024/05/20 10:22:58 Ready to marshal response ...
	2024/05/20 10:22:58 Ready to write response ...
	2024/05/20 10:23:02 Ready to marshal response ...
	2024/05/20 10:23:02 Ready to write response ...
	2024/05/20 10:23:19 Ready to marshal response ...
	2024/05/20 10:23:19 Ready to write response ...
	2024/05/20 10:23:19 Ready to marshal response ...
	2024/05/20 10:23:19 Ready to write response ...
	2024/05/20 10:23:27 Ready to marshal response ...
	2024/05/20 10:23:27 Ready to write response ...
	2024/05/20 10:23:30 Ready to marshal response ...
	2024/05/20 10:23:30 Ready to write response ...
	2024/05/20 10:23:55 Ready to marshal response ...
	2024/05/20 10:23:55 Ready to write response ...
	2024/05/20 10:23:55 Ready to marshal response ...
	2024/05/20 10:23:55 Ready to write response ...
	2024/05/20 10:23:55 Ready to marshal response ...
	2024/05/20 10:23:55 Ready to write response ...
	2024/05/20 10:24:17 Ready to marshal response ...
	2024/05/20 10:24:17 Ready to write response ...
	2024/05/20 10:24:27 Ready to marshal response ...
	2024/05/20 10:24:27 Ready to write response ...
	
	
	==> kernel <==
	 10:24:52 up 6 min,  0 users,  load average: 1.07, 1.59, 0.78
	Linux addons-988376 5.15.0-1061-aws #67~20.04.1-Ubuntu SMP Wed Apr 17 15:09:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [ec1f433403d6] <==
	E0520 10:22:05.314219       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 10:22:05.411987       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0520 10:23:10.813877       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0520 10:23:43.400189       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0520 10:23:47.442318       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:23:47.442371       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:23:47.474143       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:23:47.474407       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:23:47.484613       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:23:47.484660       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:23:47.498312       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:23:47.498558       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0520 10:23:47.530684       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0520 10:23:47.530915       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0520 10:23:48.484752       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0520 10:23:48.530780       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0520 10:23:48.547528       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0520 10:23:55.095504       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.119.177"}
	I0520 10:24:11.695261       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0520 10:24:12.730906       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0520 10:24:17.471802       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0520 10:24:17.735013       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.39.61"}
	I0520 10:24:27.365520       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.38.8"}
	E0520 10:24:44.087947       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [dc9c67ae0f86] <==
	W0520 10:24:20.505349       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:24:20.505388       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0520 10:24:21.880046       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:24:21.880086       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:24:21.888511       1 namespace_controller.go:182] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0520 10:24:26.344429       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:24:26.344466       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:24:27.214551       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="34.624476ms"
	I0520 10:24:27.246779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="32.161728ms"
	I0520 10:24:27.247066       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="58.692µs"
	I0520 10:24:28.006907       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0520 10:24:28.006958       1 shared_informer.go:320] Caches are synced for resource quota
	I0520 10:24:28.454664       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0520 10:24:28.454711       1 shared_informer.go:320] Caches are synced for garbage collector
	W0520 10:24:29.100237       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:24:29.100277       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0520 10:24:30.961278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.148µs"
	I0520 10:24:31.979685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="73.979µs"
	I0520 10:24:32.996103       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="40.32µs"
	I0520 10:24:44.016104       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0520 10:24:44.027853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="8.304µs"
	I0520 10:24:44.028103       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0520 10:24:48.335141       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-86c47465fc" duration="39.024µs"
	W0520 10:24:50.292988       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0520 10:24:50.293026       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [53d0615bc6ca] <==
	I0520 10:21:29.908476       1 server_linux.go:69] "Using iptables proxy"
	I0520 10:21:29.925663       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0520 10:21:29.975311       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0520 10:21:29.975372       1 server_linux.go:165] "Using iptables Proxier"
	I0520 10:21:29.985838       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0520 10:21:29.985863       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0520 10:21:29.985892       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0520 10:21:29.986223       1 server.go:872] "Version info" version="v1.30.1"
	I0520 10:21:29.986251       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0520 10:21:29.995451       1 config.go:192] "Starting service config controller"
	I0520 10:21:29.995474       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0520 10:21:29.995523       1 config.go:101] "Starting endpoint slice config controller"
	I0520 10:21:29.995528       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0520 10:21:29.996322       1 config.go:319] "Starting node config controller"
	I0520 10:21:29.996334       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0520 10:21:30.096032       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0520 10:21:30.096103       1 shared_informer.go:320] Caches are synced for service config
	I0520 10:21:30.096369       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7716976c0afc] <==
	W0520 10:21:12.454888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 10:21:12.455330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0520 10:21:12.457003       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 10:21:12.457187       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0520 10:21:12.457661       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 10:21:12.457842       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0520 10:21:12.457725       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:21:12.457784       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:21:12.458041       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 10:21:12.458181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0520 10:21:13.326908       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 10:21:13.327011       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0520 10:21:13.373121       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0520 10:21:13.373242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0520 10:21:13.431187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 10:21:13.431312       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0520 10:21:13.431425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 10:21:13.431445       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0520 10:21:13.454662       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 10:21:13.454957       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 10:21:13.570516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 10:21:13.570751       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0520 10:21:13.587519       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0520 10:21:13.587777       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0520 10:21:16.343774       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 20 10:24:31 addons-988376 kubelet[2251]: I0520 10:24:31.966484    2251 scope.go:117] "RemoveContainer" containerID="06f1bc007ba7ac499f9424cd03982f1155273ac8c24668d0d803345d20e86e45"
	May 20 10:24:31 addons-988376 kubelet[2251]: E0520 10:24:31.966896    2251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-nnzw6_default(7b1353ed-be92-4c66-9b10-6fb778ef0b28)\"" pod="default/hello-world-app-86c47465fc-nnzw6" podUID="7b1353ed-be92-4c66-9b10-6fb778ef0b28"
	May 20 10:24:32 addons-988376 kubelet[2251]: I0520 10:24:32.984480    2251 scope.go:117] "RemoveContainer" containerID="06f1bc007ba7ac499f9424cd03982f1155273ac8c24668d0d803345d20e86e45"
	May 20 10:24:32 addons-988376 kubelet[2251]: E0520 10:24:32.984737    2251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-nnzw6_default(7b1353ed-be92-4c66-9b10-6fb778ef0b28)\"" pod="default/hello-world-app-86c47465fc-nnzw6" podUID="7b1353ed-be92-4c66-9b10-6fb778ef0b28"
	May 20 10:24:40 addons-988376 kubelet[2251]: I0520 10:24:40.127674    2251 scope.go:117] "RemoveContainer" containerID="d1d7153de82a8425c79f8f88207d7062b3cb7b1ce0c79c86f728eae476ce72e9"
	May 20 10:24:40 addons-988376 kubelet[2251]: E0520 10:24:40.127985    2251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(f36e2e50-4a79-49e3-9b00-48feab710d6d)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="f36e2e50-4a79-49e3-9b00-48feab710d6d"
	May 20 10:24:43 addons-988376 kubelet[2251]: I0520 10:24:43.144532    2251 scope.go:117] "RemoveContainer" containerID="d1d7153de82a8425c79f8f88207d7062b3cb7b1ce0c79c86f728eae476ce72e9"
	May 20 10:24:43 addons-988376 kubelet[2251]: I0520 10:24:43.148795    2251 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf2k5\" (UniqueName: \"kubernetes.io/projected/f36e2e50-4a79-49e3-9b00-48feab710d6d-kube-api-access-xf2k5\") pod \"f36e2e50-4a79-49e3-9b00-48feab710d6d\" (UID: \"f36e2e50-4a79-49e3-9b00-48feab710d6d\") "
	May 20 10:24:43 addons-988376 kubelet[2251]: I0520 10:24:43.153081    2251 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f36e2e50-4a79-49e3-9b00-48feab710d6d-kube-api-access-xf2k5" (OuterVolumeSpecName: "kube-api-access-xf2k5") pod "f36e2e50-4a79-49e3-9b00-48feab710d6d" (UID: "f36e2e50-4a79-49e3-9b00-48feab710d6d"). InnerVolumeSpecName "kube-api-access-xf2k5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:24:43 addons-988376 kubelet[2251]: I0520 10:24:43.249767    2251 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-xf2k5\" (UniqueName: \"kubernetes.io/projected/f36e2e50-4a79-49e3-9b00-48feab710d6d-kube-api-access-xf2k5\") on node \"addons-988376\" DevicePath \"\""
	May 20 10:24:45 addons-988376 kubelet[2251]: I0520 10:24:45.138002    2251 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da96f8bb-cb4b-4338-8078-217f63c4295a" path="/var/lib/kubelet/pods/da96f8bb-cb4b-4338-8078-217f63c4295a/volumes"
	May 20 10:24:45 addons-988376 kubelet[2251]: I0520 10:24:45.139137    2251 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee18bf5b-5fa7-47e1-95bd-6d9768f1909b" path="/var/lib/kubelet/pods/ee18bf5b-5fa7-47e1-95bd-6d9768f1909b/volumes"
	May 20 10:24:45 addons-988376 kubelet[2251]: I0520 10:24:45.139520    2251 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f36e2e50-4a79-49e3-9b00-48feab710d6d" path="/var/lib/kubelet/pods/f36e2e50-4a79-49e3-9b00-48feab710d6d/volumes"
	May 20 10:24:47 addons-988376 kubelet[2251]: I0520 10:24:47.376646    2251 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zp4l\" (UniqueName: \"kubernetes.io/projected/a14cc33a-8982-4150-a355-e55c8883a568-kube-api-access-2zp4l\") pod \"a14cc33a-8982-4150-a355-e55c8883a568\" (UID: \"a14cc33a-8982-4150-a355-e55c8883a568\") "
	May 20 10:24:47 addons-988376 kubelet[2251]: I0520 10:24:47.376704    2251 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a14cc33a-8982-4150-a355-e55c8883a568-webhook-cert\") pod \"a14cc33a-8982-4150-a355-e55c8883a568\" (UID: \"a14cc33a-8982-4150-a355-e55c8883a568\") "
	May 20 10:24:47 addons-988376 kubelet[2251]: I0520 10:24:47.378964    2251 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a14cc33a-8982-4150-a355-e55c8883a568-kube-api-access-2zp4l" (OuterVolumeSpecName: "kube-api-access-2zp4l") pod "a14cc33a-8982-4150-a355-e55c8883a568" (UID: "a14cc33a-8982-4150-a355-e55c8883a568"). InnerVolumeSpecName "kube-api-access-2zp4l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 20 10:24:47 addons-988376 kubelet[2251]: I0520 10:24:47.382535    2251 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a14cc33a-8982-4150-a355-e55c8883a568-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a14cc33a-8982-4150-a355-e55c8883a568" (UID: "a14cc33a-8982-4150-a355-e55c8883a568"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	May 20 10:24:47 addons-988376 kubelet[2251]: I0520 10:24:47.477896    2251 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-2zp4l\" (UniqueName: \"kubernetes.io/projected/a14cc33a-8982-4150-a355-e55c8883a568-kube-api-access-2zp4l\") on node \"addons-988376\" DevicePath \"\""
	May 20 10:24:47 addons-988376 kubelet[2251]: I0520 10:24:47.477937    2251 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a14cc33a-8982-4150-a355-e55c8883a568-webhook-cert\") on node \"addons-988376\" DevicePath \"\""
	May 20 10:24:48 addons-988376 kubelet[2251]: I0520 10:24:48.127202    2251 scope.go:117] "RemoveContainer" containerID="06f1bc007ba7ac499f9424cd03982f1155273ac8c24668d0d803345d20e86e45"
	May 20 10:24:48 addons-988376 kubelet[2251]: I0520 10:24:48.303114    2251 scope.go:117] "RemoveContainer" containerID="4fb7f9c185dc30491e42da84494fcfcded19f510e83db873d73018ba80ed9673"
	May 20 10:24:48 addons-988376 kubelet[2251]: I0520 10:24:48.320535    2251 scope.go:117] "RemoveContainer" containerID="d187d789ba0c6490808e5a61b454d7b754328eb2ee4a1e416a72c71c99e8f8a0"
	May 20 10:24:48 addons-988376 kubelet[2251]: E0520 10:24:48.321278    2251 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-86c47465fc-nnzw6_default(7b1353ed-be92-4c66-9b10-6fb778ef0b28)\"" pod="default/hello-world-app-86c47465fc-nnzw6" podUID="7b1353ed-be92-4c66-9b10-6fb778ef0b28"
	May 20 10:24:48 addons-988376 kubelet[2251]: I0520 10:24:48.330547    2251 scope.go:117] "RemoveContainer" containerID="06f1bc007ba7ac499f9424cd03982f1155273ac8c24668d0d803345d20e86e45"
	May 20 10:24:49 addons-988376 kubelet[2251]: I0520 10:24:49.135021    2251 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a14cc33a-8982-4150-a355-e55c8883a568" path="/var/lib/kubelet/pods/a14cc33a-8982-4150-a355-e55c8883a568/volumes"
	
	
	==> storage-provisioner [638c19435555] <==
	I0520 10:21:34.475713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 10:21:34.529877       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 10:21:34.529921       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 10:21:34.550852       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 10:21:34.552743       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-988376_59035872-a65e-4efe-89d4-c0546601a0a4!
	I0520 10:21:34.552792       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbf91ce5-9a14-4518-abcf-12e44aabf6b3", APIVersion:"v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-988376_59035872-a65e-4efe-89d4-c0546601a0a4 became leader
	I0520 10:21:34.653284       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-988376_59035872-a65e-4efe-89d4-c0546601a0a4!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-988376 -n addons-988376
helpers_test.go:261: (dbg) Run:  kubectl --context addons-988376 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (35.97s)
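
The kubelet log in the post-mortem above shows the kube-ingress-dns-minikube pod being cleaned up after repeated CrashLoopBackOff restarts. A minimal sketch of manual triage for that pod, assuming the same profile/context name addons-988376 used by the harness commands above and run before the addon is disabled, would be:

	# hypothetical manual follow-up, not emitted by the test harness
	kubectl --context addons-988376 -n kube-system describe pod kube-ingress-dns-minikube
	kubectl --context addons-988376 -n kube-system logs kube-ingress-dns-minikube --previous
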

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (376.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-879853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-879853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m12.958590078s)

                                                
                                                
-- stdout --
	* [old-k8s-version-879853] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-879853" primary control-plane node in "old-k8s-version-879853" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "old-k8s-version-879853" ...
	* Preparing Kubernetes v1.20.0 on Docker 26.1.2 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-879853 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 11:14:38.149610  294755 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:14:38.149766  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:14:38.149773  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:14:38.149783  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:14:38.150036  294755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 11:14:38.150387  294755 out.go:298] Setting JSON to false
	I0520 11:14:38.151399  294755 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3399,"bootTime":1716200280,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 11:14:38.151464  294755 start.go:139] virtualization:  
	I0520 11:14:38.154141  294755 out.go:177] * [old-k8s-version-879853] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 11:14:38.156555  294755 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:14:38.158198  294755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:14:38.156598  294755 notify.go:220] Checking for updates...
	I0520 11:14:38.162025  294755 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 11:14:38.163640  294755 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 11:14:38.165877  294755 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 11:14:38.167634  294755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:14:38.170042  294755 config.go:182] Loaded profile config "old-k8s-version-879853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 11:14:38.172341  294755 out.go:177] * Kubernetes 1.30.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.1
	I0520 11:14:38.174134  294755 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:14:38.199870  294755 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 11:14:38.199983  294755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:14:38.293889  294755 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 11:14:38.282980487 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:14:38.294060  294755 docker.go:295] overlay module found
	I0520 11:14:38.296183  294755 out.go:177] * Using the docker driver based on existing profile
	I0520 11:14:38.298029  294755 start.go:297] selected driver: docker
	I0520 11:14:38.298069  294755 start.go:901] validating driver "docker" against &{Name:old-k8s-version-879853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-879853 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountSt
ring:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:14:38.298212  294755 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:14:38.298945  294755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:14:38.385182  294755 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 11:14:38.364777123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:14:38.385527  294755 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:14:38.385548  294755 cni.go:84] Creating CNI manager for ""
	I0520 11:14:38.385560  294755 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 11:14:38.385595  294755 start.go:340] cluster config:
	{Name:old-k8s-version-879853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-879853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:14:38.387560  294755 out.go:177] * Starting "old-k8s-version-879853" primary control-plane node in "old-k8s-version-879853" cluster
	I0520 11:14:38.389172  294755 cache.go:121] Beginning downloading kic base image for docker with docker
	I0520 11:14:38.391191  294755 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 11:14:38.392760  294755 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 11:14:38.392811  294755 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 11:14:38.392819  294755 cache.go:56] Caching tarball of preloaded images
	I0520 11:14:38.392910  294755 preload.go:173] Found /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 11:14:38.392924  294755 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0520 11:14:38.393030  294755 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/config.json ...
	I0520 11:14:38.393268  294755 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 11:14:38.407859  294755 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0520 11:14:38.407892  294755 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0520 11:14:38.407912  294755 cache.go:194] Successfully downloaded all kic artifacts
	I0520 11:14:38.407940  294755 start.go:360] acquireMachinesLock for old-k8s-version-879853: {Name:mka4c1e89cfccad8376ed22de63d5c67fb2dc918 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:14:38.407998  294755 start.go:364] duration metric: took 37.53µs to acquireMachinesLock for "old-k8s-version-879853"
	I0520 11:14:38.408017  294755 start.go:96] Skipping create...Using existing machine configuration
	I0520 11:14:38.408022  294755 fix.go:54] fixHost starting: 
	I0520 11:14:38.408292  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:38.426529  294755 fix.go:112] recreateIfNeeded on old-k8s-version-879853: state=Stopped err=<nil>
	W0520 11:14:38.426569  294755 fix.go:138] unexpected machine state, will restart: <nil>
	I0520 11:14:38.428619  294755 out.go:177] * Restarting existing docker container for "old-k8s-version-879853" ...
	I0520 11:14:38.430350  294755 cli_runner.go:164] Run: docker start old-k8s-version-879853
	I0520 11:14:38.763122  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:38.794479  294755 kic.go:430] container "old-k8s-version-879853" state is running.
	I0520 11:14:38.796861  294755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-879853
	I0520 11:14:38.823289  294755 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/config.json ...
	I0520 11:14:38.823764  294755 machine.go:94] provisionDockerMachine start ...
	I0520 11:14:38.823846  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:38.843812  294755 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:38.844074  294755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0520 11:14:38.844091  294755 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:14:38.846212  294755 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0520 11:14:41.972756  294755 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-879853
	
	I0520 11:14:41.972797  294755 ubuntu.go:169] provisioning hostname "old-k8s-version-879853"
	I0520 11:14:41.972863  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:41.993898  294755 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:41.994142  294755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0520 11:14:41.994154  294755 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-879853 && echo "old-k8s-version-879853" | sudo tee /etc/hostname
	I0520 11:14:42.148643  294755 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-879853
	
	I0520 11:14:42.148777  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:42.174919  294755 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:42.175197  294755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0520 11:14:42.175220  294755 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-879853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-879853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-879853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:14:42.310113  294755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:14:42.310227  294755 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-2151/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-2151/.minikube}
	I0520 11:14:42.310298  294755 ubuntu.go:177] setting up certificates
	I0520 11:14:42.310351  294755 provision.go:84] configureAuth start
	I0520 11:14:42.310456  294755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-879853
	I0520 11:14:42.332522  294755 provision.go:143] copyHostCerts
	I0520 11:14:42.332600  294755 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem, removing ...
	I0520 11:14:42.332609  294755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem
	I0520 11:14:42.332717  294755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem (1123 bytes)
	I0520 11:14:42.332822  294755 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem, removing ...
	I0520 11:14:42.332827  294755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem
	I0520 11:14:42.332857  294755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem (1675 bytes)
	I0520 11:14:42.332907  294755 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem, removing ...
	I0520 11:14:42.332918  294755 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem
	I0520 11:14:42.332946  294755 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem (1078 bytes)
	I0520 11:14:42.332996  294755 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-879853 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-879853]
	I0520 11:14:43.164859  294755 provision.go:177] copyRemoteCerts
	I0520 11:14:43.164986  294755 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:14:43.165099  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:43.181688  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:43.274259  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:14:43.312265  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0520 11:14:43.358934  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0520 11:14:43.387371  294755 provision.go:87] duration metric: took 1.076988978s to configureAuth
	I0520 11:14:43.387399  294755 ubuntu.go:193] setting minikube options for container-runtime
	I0520 11:14:43.387592  294755 config.go:182] Loaded profile config "old-k8s-version-879853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 11:14:43.387659  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:43.410132  294755 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:43.410379  294755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0520 11:14:43.410392  294755 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 11:14:43.558364  294755 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0520 11:14:43.558435  294755 ubuntu.go:71] root file system type: overlay
	I0520 11:14:43.558584  294755 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 11:14:43.558721  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:43.577611  294755 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:43.577842  294755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0520 11:14:43.577916  294755 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 11:14:43.737847  294755 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 11:14:43.738001  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:43.762460  294755 main.go:141] libmachine: Using SSH client type: native
	I0520 11:14:43.762715  294755 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0520 11:14:43.762733  294755 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 11:14:43.922553  294755 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:14:43.922577  294755 machine.go:97] duration metric: took 5.098793916s to provisionDockerMachine
	I0520 11:14:43.922589  294755 start.go:293] postStartSetup for "old-k8s-version-879853" (driver="docker")
	I0520 11:14:43.922602  294755 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:14:43.922689  294755 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:14:43.922738  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:43.963912  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:44.074512  294755 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:14:44.081788  294755 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 11:14:44.081827  294755 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 11:14:44.081838  294755 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 11:14:44.081845  294755 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 11:14:44.081856  294755 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-2151/.minikube/addons for local assets ...
	I0520 11:14:44.081923  294755 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-2151/.minikube/files for local assets ...
	I0520 11:14:44.082036  294755 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/ssl/certs/75122.pem -> 75122.pem in /etc/ssl/certs
	I0520 11:14:44.082157  294755 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:14:44.096442  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/ssl/certs/75122.pem --> /etc/ssl/certs/75122.pem (1708 bytes)
	I0520 11:14:44.137022  294755 start.go:296] duration metric: took 214.417638ms for postStartSetup
	I0520 11:14:44.137174  294755 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:14:44.137222  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:44.163006  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:44.266358  294755 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0520 11:14:44.276384  294755 fix.go:56] duration metric: took 5.868353242s for fixHost
	I0520 11:14:44.276409  294755 start.go:83] releasing machines lock for "old-k8s-version-879853", held for 5.86840262s
	I0520 11:14:44.276483  294755 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-879853
	I0520 11:14:44.303865  294755 ssh_runner.go:195] Run: cat /version.json
	I0520 11:14:44.303921  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:44.304136  294755 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:14:44.304182  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:44.337737  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:44.341427  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:44.452504  294755 ssh_runner.go:195] Run: systemctl --version
	I0520 11:14:44.612245  294755 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 11:14:44.617910  294755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0520 11:14:44.657918  294755 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0520 11:14:44.658002  294755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0520 11:14:44.682548  294755 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0520 11:14:44.710011  294755 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0520 11:14:44.710040  294755 start.go:494] detecting cgroup driver to use...
	I0520 11:14:44.710072  294755 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 11:14:44.710172  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:14:44.750950  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0520 11:14:44.767009  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 11:14:44.783930  294755 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 11:14:44.784001  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 11:14:44.799805  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 11:14:44.815676  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 11:14:44.835972  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 11:14:44.849143  294755 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:14:44.867269  294755 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 11:14:44.884891  294755 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:14:44.899109  294755 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:14:44.912727  294755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:45.081288  294755 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 11:14:45.278405  294755 start.go:494] detecting cgroup driver to use...
	I0520 11:14:45.278464  294755 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 11:14:45.278520  294755 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 11:14:45.307693  294755 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0520 11:14:45.307818  294755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 11:14:45.334617  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:14:45.371776  294755 ssh_runner.go:195] Run: which cri-dockerd
	I0520 11:14:45.381415  294755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 11:14:45.406503  294755 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 11:14:45.449699  294755 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 11:14:45.621928  294755 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 11:14:45.786933  294755 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 11:14:45.787116  294755 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 11:14:45.829719  294755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:45.969534  294755 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0520 11:14:46.700388  294755 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 11:14:46.744511  294755 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 11:14:46.769782  294755 out.go:204] * Preparing Kubernetes v1.20.0 on Docker 26.1.2 ...
	I0520 11:14:46.769877  294755 cli_runner.go:164] Run: docker network inspect old-k8s-version-879853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 11:14:46.793309  294755 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0520 11:14:46.797062  294755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:14:46.815772  294755 kubeadm.go:877] updating cluster {Name:old-k8s-version-879853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-879853 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkin
s:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0520 11:14:46.815900  294755 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 11:14:46.815971  294755 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 11:14:46.857368  294755 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0520 11:14:46.857394  294755 docker.go:615] Images already preloaded, skipping extraction
	I0520 11:14:46.857460  294755 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0520 11:14:46.883373  294755 docker.go:685] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	registry.k8s.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	registry.k8s.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	registry.k8s.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	registry.k8s.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	registry.k8s.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	registry.k8s.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	registry.k8s.io/pause:3.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0520 11:14:46.883401  294755 cache_images.go:84] Images are preloaded, skipping loading
	I0520 11:14:46.883411  294755 kubeadm.go:928] updating node { 192.168.85.2 8443 v1.20.0 docker true true} ...
	I0520 11:14:46.883528  294755 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-879853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-879853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0520 11:14:46.883604  294755 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0520 11:14:46.976756  294755 cni.go:84] Creating CNI manager for ""
	I0520 11:14:46.976787  294755 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 11:14:46.976796  294755 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0520 11:14:46.976813  294755 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-879853 NodeName:old-k8s-version-879853 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0520 11:14:46.976958  294755 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-879853"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0520 11:14:46.977034  294755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0520 11:14:46.998422  294755 binaries.go:44] Found k8s binaries, skipping transfer
	I0520 11:14:46.998496  294755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0520 11:14:47.015899  294755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0520 11:14:47.051894  294755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0520 11:14:47.091437  294755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I0520 11:14:47.124301  294755 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0520 11:14:47.132155  294755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0520 11:14:47.148456  294755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:47.286490  294755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:14:47.309153  294755 certs.go:68] Setting up /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853 for IP: 192.168.85.2
	I0520 11:14:47.309172  294755 certs.go:194] generating shared ca certs ...
	I0520 11:14:47.309188  294755 certs.go:226] acquiring lock for ca certs: {Name:mka753a63b3bd30b9859f448573f70a0fd066da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:14:47.309324  294755 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18925-2151/.minikube/ca.key
	I0520 11:14:47.309377  294755 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.key
	I0520 11:14:47.309385  294755 certs.go:256] generating profile certs ...
	I0520 11:14:47.309483  294755 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.key
	I0520 11:14:47.309548  294755 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/apiserver.key.7f7d5629
	I0520 11:14:47.309586  294755 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/proxy-client.key
	I0520 11:14:47.309688  294755 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/7512.pem (1338 bytes)
	W0520 11:14:47.309716  294755 certs.go:480] ignoring /home/jenkins/minikube-integration/18925-2151/.minikube/certs/7512_empty.pem, impossibly tiny 0 bytes
	I0520 11:14:47.309724  294755 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem (1679 bytes)
	I0520 11:14:47.309751  294755 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem (1078 bytes)
	I0520 11:14:47.309774  294755 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem (1123 bytes)
	I0520 11:14:47.309796  294755 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem (1675 bytes)
	I0520 11:14:47.309836  294755 certs.go:484] found cert: /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/ssl/certs/75122.pem (1708 bytes)
	I0520 11:14:47.310532  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0520 11:14:47.363005  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0520 11:14:47.411364  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0520 11:14:47.455231  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0520 11:14:47.533426  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0520 11:14:47.585383  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0520 11:14:47.641601  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0520 11:14:47.704397  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0520 11:14:47.770490  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/certs/7512.pem --> /usr/share/ca-certificates/7512.pem (1338 bytes)
	I0520 11:14:47.851530  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/ssl/certs/75122.pem --> /usr/share/ca-certificates/75122.pem (1708 bytes)
	I0520 11:14:47.903002  294755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0520 11:14:47.938693  294755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0520 11:14:47.975427  294755 ssh_runner.go:195] Run: openssl version
	I0520 11:14:47.981806  294755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75122.pem && ln -fs /usr/share/ca-certificates/75122.pem /etc/ssl/certs/75122.pem"
	I0520 11:14:47.995316  294755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75122.pem
	I0520 11:14:47.998925  294755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 20 10:26 /usr/share/ca-certificates/75122.pem
	I0520 11:14:47.999019  294755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75122.pem
	I0520 11:14:48.008535  294755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75122.pem /etc/ssl/certs/3ec20f2e.0"
	I0520 11:14:48.019265  294755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0520 11:14:48.032213  294755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:48.038525  294755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 20 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:48.038632  294755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0520 11:14:48.049728  294755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0520 11:14:48.061130  294755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7512.pem && ln -fs /usr/share/ca-certificates/7512.pem /etc/ssl/certs/7512.pem"
	I0520 11:14:48.078853  294755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7512.pem
	I0520 11:14:48.083143  294755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 20 10:26 /usr/share/ca-certificates/7512.pem
	I0520 11:14:48.083240  294755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7512.pem
	I0520 11:14:48.091991  294755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7512.pem /etc/ssl/certs/51391683.0"
	I0520 11:14:48.105368  294755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0520 11:14:48.111031  294755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0520 11:14:48.123896  294755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0520 11:14:48.134275  294755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0520 11:14:48.147798  294755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0520 11:14:48.165906  294755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0520 11:14:48.178133  294755 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0520 11:14:48.189809  294755 kubeadm.go:391] StartCluster: {Name:old-k8s-version-879853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-879853 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/
minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:14:48.189974  294755 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0520 11:14:48.214594  294755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0520 11:14:48.227973  294755 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0520 11:14:48.227996  294755 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0520 11:14:48.228023  294755 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0520 11:14:48.228074  294755 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0520 11:14:48.239039  294755 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0520 11:14:48.239546  294755 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-879853" does not appear in /home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 11:14:48.239682  294755 kubeconfig.go:62] /home/jenkins/minikube-integration/18925-2151/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-879853" cluster setting kubeconfig missing "old-k8s-version-879853" context setting]
	I0520 11:14:48.239990  294755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/kubeconfig: {Name:mk3d714476b7ca0e67bf2a31cd3b93dbb70011b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:14:48.241517  294755 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0520 11:14:48.257175  294755 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.85.2
	I0520 11:14:48.257218  294755 kubeadm.go:591] duration metric: took 29.188949ms to restartPrimaryControlPlane
	I0520 11:14:48.257228  294755 kubeadm.go:393] duration metric: took 67.429992ms to StartCluster
	I0520 11:14:48.257244  294755 settings.go:142] acquiring lock: {Name:mkf178671fce68e287b32051308c404994baee58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:14:48.257313  294755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 11:14:48.257977  294755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/kubeconfig: {Name:mk3d714476b7ca0e67bf2a31cd3b93dbb70011b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:14:48.258183  294755 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 11:14:48.261002  294755 out.go:177] * Verifying Kubernetes components...
	I0520 11:14:48.258535  294755 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0520 11:14:48.258588  294755 config.go:182] Loaded profile config "old-k8s-version-879853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 11:14:48.262789  294755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:14:48.262908  294755 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-879853"
	I0520 11:14:48.262937  294755 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-879853"
	W0520 11:14:48.262948  294755 addons.go:243] addon storage-provisioner should already be in state true
	I0520 11:14:48.262980  294755 host.go:66] Checking if "old-k8s-version-879853" exists ...
	I0520 11:14:48.263058  294755 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-879853"
	I0520 11:14:48.263088  294755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-879853"
	I0520 11:14:48.263379  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:48.263891  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:48.264116  294755 addons.go:69] Setting dashboard=true in profile "old-k8s-version-879853"
	I0520 11:14:48.264141  294755 addons.go:234] Setting addon dashboard=true in "old-k8s-version-879853"
	W0520 11:14:48.264148  294755 addons.go:243] addon dashboard should already be in state true
	I0520 11:14:48.264172  294755 host.go:66] Checking if "old-k8s-version-879853" exists ...
	I0520 11:14:48.264534  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:48.264778  294755 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-879853"
	I0520 11:14:48.264805  294755 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-879853"
	W0520 11:14:48.264811  294755 addons.go:243] addon metrics-server should already be in state true
	I0520 11:14:48.264852  294755 host.go:66] Checking if "old-k8s-version-879853" exists ...
	I0520 11:14:48.265321  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:48.334005  294755 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-879853"
	W0520 11:14:48.334031  294755 addons.go:243] addon default-storageclass should already be in state true
	I0520 11:14:48.334056  294755 host.go:66] Checking if "old-k8s-version-879853" exists ...
	I0520 11:14:48.334442  294755 cli_runner.go:164] Run: docker container inspect old-k8s-version-879853 --format={{.State.Status}}
	I0520 11:14:48.342419  294755 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0520 11:14:48.347026  294755 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0520 11:14:48.350537  294755 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0520 11:14:48.350550  294755 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:48.347059  294755 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0520 11:14:48.354202  294755 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0520 11:14:48.354224  294755 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0520 11:14:48.354300  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:48.352271  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0520 11:14:48.357139  294755 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0520 11:14:48.357227  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:48.352285  294755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0520 11:14:48.369115  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:48.384221  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:48.409650  294755 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0520 11:14:48.409670  294755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0520 11:14:48.409744  294755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-879853
	I0520 11:14:48.421154  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:48.436077  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:48.453191  294755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/old-k8s-version-879853/id_rsa Username:docker}
	I0520 11:14:48.504077  294755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0520 11:14:48.548903  294755 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-879853" to be "Ready" ...
	I0520 11:14:48.614190  294755 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0520 11:14:48.614261  294755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0520 11:14:48.659649  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:48.668221  294755 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0520 11:14:48.668246  294755 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0520 11:14:48.694863  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0520 11:14:48.694890  294755 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0520 11:14:48.726695  294755 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:14:48.726723  294755 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0520 11:14:48.781273  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:14:48.830858  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:14:48.840164  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0520 11:14:48.840192  294755 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0520 11:14:48.925792  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:48.925830  294755 retry.go:31] will retry after 308.975377ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:48.975700  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0520 11:14:48.975747  294755 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0520 11:14:49.086878  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0520 11:14:49.086904  294755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0520 11:14:49.126899  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.126936  294755 retry.go:31] will retry after 329.731046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:49.154115  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.154149  294755 retry.go:31] will retry after 153.299281ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.156834  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0520 11:14:49.156861  294755 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0520 11:14:49.183827  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0520 11:14:49.183853  294755 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0520 11:14:49.219376  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0520 11:14:49.219417  294755 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0520 11:14:49.235716  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:49.251906  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0520 11:14:49.251931  294755 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0520 11:14:49.308186  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:14:49.344011  294755 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:14:49.344039  294755 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0520 11:14:49.404734  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:14:49.457086  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0520 11:14:49.534697  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.534732  294755 retry.go:31] will retry after 412.874952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:49.715513  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.715547  294755 retry.go:31] will retry after 509.636972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:49.738577  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.738613  294755 retry.go:31] will retry after 353.505529ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:49.738662  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.738680  294755 retry.go:31] will retry after 214.552777ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:49.948123  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:49.954408  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:14:50.093066  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:14:50.169258  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.169295  294755 retry.go:31] will retry after 434.334095ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:50.185590  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.185624  294755 retry.go:31] will retry after 821.79813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.225956  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0520 11:14:50.302518  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.302551  294755 retry.go:31] will retry after 378.237404ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:50.379564  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.379598  294755 retry.go:31] will retry after 416.67486ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.550248  294755 node_ready.go:53] error getting node "old-k8s-version-879853": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-879853": dial tcp 192.168.85.2:8443: connect: connection refused
	I0520 11:14:50.604519  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:50.681771  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:14:50.731279  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.731314  294755 retry.go:31] will retry after 1.198494969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.796616  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0520 11:14:50.803871  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.803907  294755 retry.go:31] will retry after 769.835859ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:50.901489  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:50.901522  294755 retry.go:31] will retry after 610.187241ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:51.007882  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0520 11:14:51.116137  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:51.116173  294755 retry.go:31] will retry after 1.048904201s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:51.512795  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:14:51.574798  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0520 11:14:51.689453  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:51.689489  294755 retry.go:31] will retry after 1.357849644s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:51.789124  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:51.789157  294755 retry.go:31] will retry after 1.078582453s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:51.930382  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:52.165350  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0520 11:14:52.313779  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:52.313857  294755 retry.go:31] will retry after 1.219367878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0520 11:14:52.558355  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:52.558440  294755 retry.go:31] will retry after 1.880649029s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:52.867963  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:14:53.047811  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0520 11:14:53.050455  294755 node_ready.go:53] error getting node "old-k8s-version-879853": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-879853": dial tcp 192.168.85.2:8443: connect: connection refused
	W0520 11:14:53.145248  294755 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:53.145396  294755 retry.go:31] will retry after 1.543297853s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0520 11:14:53.534437  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0520 11:14:54.439323  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0520 11:14:54.689277  294755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0520 11:15:01.974363  294755 node_ready.go:49] node "old-k8s-version-879853" has status "Ready":"True"
	I0520 11:15:01.974443  294755 node_ready.go:38] duration metric: took 13.425456199s for node "old-k8s-version-879853" to be "Ready" ...
	I0520 11:15:01.974471  294755 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0520 11:15:02.136374  294755 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-6l85w" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:02.387308  294755 pod_ready.go:92] pod "coredns-74ff55c5b-6l85w" in "kube-system" namespace has status "Ready":"True"
	I0520 11:15:02.387337  294755 pod_ready.go:81] duration metric: took 250.925989ms for pod "coredns-74ff55c5b-6l85w" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:02.387355  294755 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:02.461839  294755 pod_ready.go:92] pod "etcd-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"True"
	I0520 11:15:02.461868  294755 pod_ready.go:81] duration metric: took 74.503218ms for pod "etcd-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:02.461890  294755 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:03.873965  294755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.826051544s)
	I0520 11:15:03.874011  294755 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-879853"
	I0520 11:15:03.874073  294755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.33952515s)
	I0520 11:15:03.874108  294755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.434687595s)
	I0520 11:15:04.350851  294755 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.661467998s)
	I0520 11:15:04.353265  294755 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-879853 addons enable metrics-server
	
	I0520 11:15:04.356986  294755 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0520 11:15:04.359564  294755 addons.go:505] duration metric: took 16.101011285s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
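The apply/retry churn above follows one simple pattern: each addon manifest is applied with kubectl, and while the apiserver is still refusing connections on localhost:8443 the command fails and is retried after a short, growing delay until the control plane comes back. The sketch below is a minimal, self-contained illustration of that pattern; the applyManifest helper, the backoff constants, and the use of os/exec are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifest shells out to kubectl, the way the log lines above do.
// The kubeconfig path and manifest path are placeholders for this sketch.
func applyManifest(manifest string) error {
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

// applyWithRetry retries a failing apply with a jittered, growing delay,
// mirroring the "will retry after ..." messages in the log.
func applyWithRetry(manifest string, attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = applyManifest(manifest); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("apply failed, will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}

Once the apiserver answers again, a pending apply simply succeeds on its next attempt, which is why the long-running applies above complete in one burst around 11:15:03.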
	I0520 11:15:04.468439  294755 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:06.468726  294755 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:08.968415  294755 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:10.470039  294755 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"True"
	I0520 11:15:10.470066  294755 pod_ready.go:81] duration metric: took 8.008169224s for pod "kube-apiserver-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:10.470081  294755 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:15:12.477454  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:14.976467  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:16.976698  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:19.476582  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:21.480582  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:24.033019  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:26.477279  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:28.976687  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:30.977039  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:33.478150  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:35.975834  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:37.976315  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:39.976461  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:41.987233  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:44.476731  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:46.477040  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:48.480557  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:50.976599  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:53.476474  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:55.477704  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:15:57.976846  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:00.477339  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:02.978469  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:05.476651  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:07.478611  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:09.976431  294755 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:11.476771  294755 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"True"
	I0520 11:16:11.476793  294755 pod_ready.go:81] duration metric: took 1m1.006704556s for pod "kube-controller-manager-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:16:11.476805  294755 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2q9x5" in "kube-system" namespace to be "Ready" ...
	I0520 11:16:11.481811  294755 pod_ready.go:92] pod "kube-proxy-2q9x5" in "kube-system" namespace has status "Ready":"True"
	I0520 11:16:11.481899  294755 pod_ready.go:81] duration metric: took 5.086025ms for pod "kube-proxy-2q9x5" in "kube-system" namespace to be "Ready" ...
	I0520 11:16:11.481925  294755 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:16:13.488164  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:15.988397  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:17.988589  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:20.489534  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:22.490060  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:24.995217  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:27.490021  294755 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:28.489192  294755 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace has status "Ready":"True"
	I0520 11:16:28.489216  294755 pod_ready.go:81] duration metric: took 17.007271303s for pod "kube-scheduler-old-k8s-version-879853" in "kube-system" namespace to be "Ready" ...
	I0520 11:16:28.489228  294755 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace to be "Ready" ...
	I0520 11:16:30.495430  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:32.995592  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:34.996017  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:37.497013  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:39.498302  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:41.995921  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:44.498218  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:46.995687  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:49.496242  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:51.995169  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:53.995865  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:56.495754  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:16:58.497968  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:00.996752  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:03.498161  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:05.996439  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:08.496339  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:10.995517  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:12.996073  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:15.495606  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:17.495997  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:19.997165  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:22.495117  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:24.495743  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:26.496364  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:29.001717  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:31.496207  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:33.995980  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:36.495640  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:38.497133  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:40.995996  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:43.495580  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:45.995233  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:47.995681  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:50.495571  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:52.994726  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:54.995108  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:56.995252  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:17:58.996143  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:01.497842  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:03.995642  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:06.495345  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:08.495501  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:10.995892  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:13.061686  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:15.500917  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:17.501354  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:19.995724  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:21.995993  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:24.495949  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:26.498096  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:28.995223  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:30.995313  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:32.995540  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:34.995987  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:37.496428  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:39.995383  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:41.995875  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:43.995964  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:46.495826  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:48.995194  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:50.996018  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:53.495780  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:55.496397  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:57.995423  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:18:59.995833  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:01.996017  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:03.996344  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:05.997376  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:08.495788  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:10.995882  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:13.498460  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:15.994678  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:17.995305  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:19.996412  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:22.496151  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:24.998582  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:27.495478  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:29.496118  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:31.995335  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:34.000398  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:36.495592  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:38.496021  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:40.995696  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:43.495106  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:45.495475  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:47.495534  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:49.496076  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:51.995248  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:54.017406  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:56.495920  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:19:58.995888  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:00.996070  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:03.496253  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:05.995460  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:07.996020  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:10.496880  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:12.995169  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:14.995588  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:16.995958  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:19.497850  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:21.995197  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:23.995754  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:25.996122  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:28.496567  294755 pod_ready.go:102] pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace has status "Ready":"False"
	I0520 11:20:28.496601  294755 pod_ready.go:81] duration metric: took 4m0.007365062s for pod "metrics-server-9975d5f86-24f8f" in "kube-system" namespace to be "Ready" ...
	E0520 11:20:28.496612  294755 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0520 11:20:28.496620  294755 pod_ready.go:38] duration metric: took 5m26.522082822s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
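The wait that just expired is a bounded poll: the pod's Ready condition is checked every few seconds until it reports True or the surrounding wait window runs out, at which point the context deadline error above is recorded and the run moves on to log collection. Below is a minimal sketch of such a bounded readiness poll; waitPodReady and the stubbed isPodReady checker are hypothetical stand-ins, not the pod_ready.go implementation.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// isPodReady is a stand-in for an API call that inspects the pod's
// Ready condition; here it never becomes ready, like the
// metrics-server pod in the log above.
func isPodReady(ctx context.Context, namespace, name string) (bool, error) {
	return false, nil
}

// waitPodReady polls until the pod is Ready or the deadline passes.
func waitPodReady(namespace, name string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()

	for {
		ready, err := isPodReady(ctx, namespace, name)
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	err := waitPodReady("kube-system", "metrics-server-9975d5f86-24f8f", 5*time.Second)
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("pod never became Ready:", err)
	}
}

In this run the pod can never become Ready because its image pull is directed at an unreachable fake.domain registry (see the kubelet problems collected further below), so the poll is guaranteed to end in a deadline error.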
	I0520 11:20:28.496639  294755 api_server.go:52] waiting for apiserver process to appear ...
	I0520 11:20:28.496720  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 11:20:28.514066  294755 logs.go:276] 2 containers: [40bd562563eb ac483fa044f2]
	I0520 11:20:28.514149  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 11:20:28.534721  294755 logs.go:276] 2 containers: [730cfcbb3b45 55da4e11e773]
	I0520 11:20:28.534812  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 11:20:28.560414  294755 logs.go:276] 2 containers: [005cb5179890 ae2339caab7b]
	I0520 11:20:28.560501  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 11:20:28.576816  294755 logs.go:276] 2 containers: [e881ad64ded6 d7bf0c086e3a]
	I0520 11:20:28.576962  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 11:20:28.598222  294755 logs.go:276] 2 containers: [d282e88b488b b1eddd083512]
	I0520 11:20:28.598348  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 11:20:28.619563  294755 logs.go:276] 2 containers: [c8021e0d3603 33738d35cca5]
	I0520 11:20:28.619674  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 11:20:28.635999  294755 logs.go:276] 0 containers: []
	W0520 11:20:28.636059  294755 logs.go:278] No container was found matching "kindnet"
	I0520 11:20:28.636141  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0520 11:20:28.654503  294755 logs.go:276] 1 containers: [1963b29e2abd]
	I0520 11:20:28.654587  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 11:20:28.682586  294755 logs.go:276] 2 containers: [2b538e03219d 7d536cdf43d4]
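Before tailing logs, the diagnostics pass above locates each component's containers by name prefix, using docker ps with a name filter and an ID-only format string; the resulting IDs are then fed to "docker logs --tail 400". A minimal sketch of that discovery step follows; the listContainers helper and the hard-coded component list are assumptions for illustration only.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of all containers (running or exited)
// whose names match the k8s_<component> prefix, the same filter used in
// the "docker ps -a --filter=name=..." lines above.
func listContainers(component string) ([]string, error) {
	cmd := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}")
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		// Each ID would then be passed to: docker logs --tail 400 <id>
	}
}

Two IDs per component are normal here: the run restarted the cluster, so both the pre-restart and post-restart containers are still present and both get their logs collected.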
	I0520 11:20:28.682632  294755 logs.go:123] Gathering logs for kube-apiserver [ac483fa044f2] ...
	I0520 11:20:28.682644  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac483fa044f2"
	I0520 11:20:28.767604  294755 logs.go:123] Gathering logs for kube-scheduler [e881ad64ded6] ...
	I0520 11:20:28.767638  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e881ad64ded6"
	I0520 11:20:28.791207  294755 logs.go:123] Gathering logs for kube-proxy [d282e88b488b] ...
	I0520 11:20:28.791236  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d282e88b488b"
	I0520 11:20:28.813274  294755 logs.go:123] Gathering logs for kube-apiserver [40bd562563eb] ...
	I0520 11:20:28.813303  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40bd562563eb"
	I0520 11:20:28.858057  294755 logs.go:123] Gathering logs for coredns [005cb5179890] ...
	I0520 11:20:28.858090  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 005cb5179890"
	I0520 11:20:28.879290  294755 logs.go:123] Gathering logs for kube-proxy [b1eddd083512] ...
	I0520 11:20:28.879324  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1eddd083512"
	I0520 11:20:28.902011  294755 logs.go:123] Gathering logs for kube-controller-manager [c8021e0d3603] ...
	I0520 11:20:28.902039  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8021e0d3603"
	I0520 11:20:28.977340  294755 logs.go:123] Gathering logs for kubernetes-dashboard [1963b29e2abd] ...
	I0520 11:20:28.977380  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1963b29e2abd"
	I0520 11:20:29.006214  294755 logs.go:123] Gathering logs for container status ...
	I0520 11:20:29.006243  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:20:29.079308  294755 logs.go:123] Gathering logs for dmesg ...
	I0520 11:20:29.079342  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:20:29.098885  294755 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:20:29.098917  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:20:29.281599  294755 logs.go:123] Gathering logs for coredns [ae2339caab7b] ...
	I0520 11:20:29.281628  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2339caab7b"
	I0520 11:20:29.307349  294755 logs.go:123] Gathering logs for kube-scheduler [d7bf0c086e3a] ...
	I0520 11:20:29.307375  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bf0c086e3a"
	I0520 11:20:29.346554  294755 logs.go:123] Gathering logs for kube-controller-manager [33738d35cca5] ...
	I0520 11:20:29.346582  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33738d35cca5"
	I0520 11:20:29.394397  294755 logs.go:123] Gathering logs for kubelet ...
	I0520 11:20:29.394426  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:20:29.455360  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.784714    1232 reflector.go:138] object-"kube-system"/"coredns-token-k8wk7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k8wk7" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.456626  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.784913    1232 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.456914  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785143    1232 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8x5hz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8x5hz" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.457218  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785319    1232 reflector.go:138] object-"kube-system"/"metrics-server-token-4g8dg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-4g8dg" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.457435  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785509    1232 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.457665  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785712    1232 reflector.go:138] object-"kube-system"/"kube-proxy-token-4lffw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-4lffw" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.457885  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785913    1232 reflector.go:138] object-"default"/"default-token-44xhs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-44xhs" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:29.464100  294755 logs.go:138] Found kubelet problem: May 20 11:15:03 old-k8s-version-879853 kubelet[1232]: E0520 11:15:03.901214    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:29.464767  294755 logs.go:138] Found kubelet problem: May 20 11:15:04 old-k8s-version-879853 kubelet[1232]: E0520 11:15:04.328349    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.467914  294755 logs.go:138] Found kubelet problem: May 20 11:15:15 old-k8s-version-879853 kubelet[1232]: E0520 11:15:15.273676    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:29.472627  294755 logs.go:138] Found kubelet problem: May 20 11:15:22 old-k8s-version-879853 kubelet[1232]: E0520 11:15:22.024884    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:29.473022  294755 logs.go:138] Found kubelet problem: May 20 11:15:22 old-k8s-version-879853 kubelet[1232]: E0520 11:15:22.730051    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.473397  294755 logs.go:138] Found kubelet problem: May 20 11:15:28 old-k8s-version-879853 kubelet[1232]: E0520 11:15:28.228108    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.476024  294755 logs.go:138] Found kubelet problem: May 20 11:15:34 old-k8s-version-879853 kubelet[1232]: E0520 11:15:34.744287    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:29.476475  294755 logs.go:138] Found kubelet problem: May 20 11:15:34 old-k8s-version-879853 kubelet[1232]: E0520 11:15:34.859501    1232 pod_workers.go:191] Error syncing pod b1d644d9-1159-48f3-95ed-e9f20098fc95 ("storage-provisioner_kube-system(b1d644d9-1159-48f3-95ed-e9f20098fc95)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b1d644d9-1159-48f3-95ed-e9f20098fc95)"
	W0520 11:20:29.478582  294755 logs.go:138] Found kubelet problem: May 20 11:15:39 old-k8s-version-879853 kubelet[1232]: E0520 11:15:39.300402    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:29.479258  294755 logs.go:138] Found kubelet problem: May 20 11:15:49 old-k8s-version-879853 kubelet[1232]: E0520 11:15:49.228221    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.479447  294755 logs.go:138] Found kubelet problem: May 20 11:15:51 old-k8s-version-879853 kubelet[1232]: E0520 11:15:51.266478    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.481714  294755 logs.go:138] Found kubelet problem: May 20 11:16:01 old-k8s-version-879853 kubelet[1232]: E0520 11:16:01.699755    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:29.481905  294755 logs.go:138] Found kubelet problem: May 20 11:16:03 old-k8s-version-879853 kubelet[1232]: E0520 11:16:03.235985    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.482106  294755 logs.go:138] Found kubelet problem: May 20 11:16:14 old-k8s-version-879853 kubelet[1232]: E0520 11:16:14.227928    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.482294  294755 logs.go:138] Found kubelet problem: May 20 11:16:15 old-k8s-version-879853 kubelet[1232]: E0520 11:16:15.228541    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.484386  294755 logs.go:138] Found kubelet problem: May 20 11:16:27 old-k8s-version-879853 kubelet[1232]: E0520 11:16:27.259598    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:29.484587  294755 logs.go:138] Found kubelet problem: May 20 11:16:29 old-k8s-version-879853 kubelet[1232]: E0520 11:16:29.239845    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.484774  294755 logs.go:138] Found kubelet problem: May 20 11:16:39 old-k8s-version-879853 kubelet[1232]: E0520 11:16:39.229695    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.487096  294755 logs.go:138] Found kubelet problem: May 20 11:16:42 old-k8s-version-879853 kubelet[1232]: E0520 11:16:42.700115    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:29.487291  294755 logs.go:138] Found kubelet problem: May 20 11:16:51 old-k8s-version-879853 kubelet[1232]: E0520 11:16:51.246627    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.487498  294755 logs.go:138] Found kubelet problem: May 20 11:16:54 old-k8s-version-879853 kubelet[1232]: E0520 11:16:54.228244    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.487700  294755 logs.go:138] Found kubelet problem: May 20 11:17:06 old-k8s-version-879853 kubelet[1232]: E0520 11:17:06.228973    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.487903  294755 logs.go:138] Found kubelet problem: May 20 11:17:06 old-k8s-version-879853 kubelet[1232]: E0520 11:17:06.229350    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.488103  294755 logs.go:138] Found kubelet problem: May 20 11:17:20 old-k8s-version-879853 kubelet[1232]: E0520 11:17:20.228064    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.488291  294755 logs.go:138] Found kubelet problem: May 20 11:17:21 old-k8s-version-879853 kubelet[1232]: E0520 11:17:21.228613    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.488482  294755 logs.go:138] Found kubelet problem: May 20 11:17:32 old-k8s-version-879853 kubelet[1232]: E0520 11:17:32.228111    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.488693  294755 logs.go:138] Found kubelet problem: May 20 11:17:34 old-k8s-version-879853 kubelet[1232]: E0520 11:17:34.243800    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.488880  294755 logs.go:138] Found kubelet problem: May 20 11:17:46 old-k8s-version-879853 kubelet[1232]: E0520 11:17:46.228054    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.489086  294755 logs.go:138] Found kubelet problem: May 20 11:17:48 old-k8s-version-879853 kubelet[1232]: E0520 11:17:48.228327    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.489289  294755 logs.go:138] Found kubelet problem: May 20 11:18:00 old-k8s-version-879853 kubelet[1232]: E0520 11:18:00.229869    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.491404  294755 logs.go:138] Found kubelet problem: May 20 11:18:00 old-k8s-version-879853 kubelet[1232]: E0520 11:18:00.274754    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:29.493704  294755 logs.go:138] Found kubelet problem: May 20 11:18:11 old-k8s-version-879853 kubelet[1232]: E0520 11:18:11.688233    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:29.493898  294755 logs.go:138] Found kubelet problem: May 20 11:18:15 old-k8s-version-879853 kubelet[1232]: E0520 11:18:15.228362    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.494099  294755 logs.go:138] Found kubelet problem: May 20 11:18:23 old-k8s-version-879853 kubelet[1232]: E0520 11:18:23.238275    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.494287  294755 logs.go:138] Found kubelet problem: May 20 11:18:26 old-k8s-version-879853 kubelet[1232]: E0520 11:18:26.228357    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.494489  294755 logs.go:138] Found kubelet problem: May 20 11:18:34 old-k8s-version-879853 kubelet[1232]: E0520 11:18:34.227935    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.494675  294755 logs.go:138] Found kubelet problem: May 20 11:18:40 old-k8s-version-879853 kubelet[1232]: E0520 11:18:40.228209    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.494877  294755 logs.go:138] Found kubelet problem: May 20 11:18:49 old-k8s-version-879853 kubelet[1232]: E0520 11:18:49.235833    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.495063  294755 logs.go:138] Found kubelet problem: May 20 11:18:51 old-k8s-version-879853 kubelet[1232]: E0520 11:18:51.228282    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.495264  294755 logs.go:138] Found kubelet problem: May 20 11:19:03 old-k8s-version-879853 kubelet[1232]: E0520 11:19:03.230228    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.495454  294755 logs.go:138] Found kubelet problem: May 20 11:19:06 old-k8s-version-879853 kubelet[1232]: E0520 11:19:06.228367    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.495654  294755 logs.go:138] Found kubelet problem: May 20 11:19:15 old-k8s-version-879853 kubelet[1232]: E0520 11:19:15.232306    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.495843  294755 logs.go:138] Found kubelet problem: May 20 11:19:19 old-k8s-version-879853 kubelet[1232]: E0520 11:19:19.228241    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.496043  294755 logs.go:138] Found kubelet problem: May 20 11:19:26 old-k8s-version-879853 kubelet[1232]: E0520 11:19:26.228031    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.496232  294755 logs.go:138] Found kubelet problem: May 20 11:19:30 old-k8s-version-879853 kubelet[1232]: E0520 11:19:30.228171    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.496434  294755 logs.go:138] Found kubelet problem: May 20 11:19:40 old-k8s-version-879853 kubelet[1232]: E0520 11:19:40.228690    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.496622  294755 logs.go:138] Found kubelet problem: May 20 11:19:41 old-k8s-version-879853 kubelet[1232]: E0520 11:19:41.228364    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.496825  294755 logs.go:138] Found kubelet problem: May 20 11:19:53 old-k8s-version-879853 kubelet[1232]: E0520 11:19:53.236437    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.497013  294755 logs.go:138] Found kubelet problem: May 20 11:19:56 old-k8s-version-879853 kubelet[1232]: E0520 11:19:56.228010    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.497220  294755 logs.go:138] Found kubelet problem: May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.228563    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.497413  294755 logs.go:138] Found kubelet problem: May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.497613  294755 logs.go:138] Found kubelet problem: May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.497806  294755 logs.go:138] Found kubelet problem: May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0520 11:20:29.497816  294755 logs.go:123] Gathering logs for etcd [55da4e11e773] ...
	I0520 11:20:29.497830  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55da4e11e773"
	I0520 11:20:29.526500  294755 logs.go:123] Gathering logs for storage-provisioner [2b538e03219d] ...
	I0520 11:20:29.526531  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b538e03219d"
	I0520 11:20:29.550334  294755 logs.go:123] Gathering logs for storage-provisioner [7d536cdf43d4] ...
	I0520 11:20:29.550365  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d536cdf43d4"
	I0520 11:20:29.571229  294755 logs.go:123] Gathering logs for Docker ...
	I0520 11:20:29.571255  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 11:20:29.599827  294755 logs.go:123] Gathering logs for etcd [730cfcbb3b45] ...
	I0520 11:20:29.599866  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 730cfcbb3b45"
	I0520 11:20:29.629989  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:29.630023  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 11:20:29.630098  294755 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0520 11:20:29.630108  294755 out.go:239]   May 20 11:19:56 old-k8s-version-879853 kubelet[1232]: E0520 11:19:56.228010    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 20 11:19:56 old-k8s-version-879853 kubelet[1232]: E0520 11:19:56.228010    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.630117  294755 out.go:239]   May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.228563    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.228563    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.630142  294755 out.go:239]   May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.630150  294755 out.go:239]   May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:29.630160  294755 out.go:239]   May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0520 11:20:29.630166  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:29.630177  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:20:39.631205  294755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:20:39.646682  294755 api_server.go:72] duration metric: took 5m51.388454699s to wait for apiserver process to appear ...
	I0520 11:20:39.646717  294755 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:20:39.646814  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 11:20:39.681446  294755 logs.go:276] 2 containers: [40bd562563eb ac483fa044f2]
	I0520 11:20:39.681529  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 11:20:39.715894  294755 logs.go:276] 2 containers: [730cfcbb3b45 55da4e11e773]
	I0520 11:20:39.715965  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 11:20:39.737234  294755 logs.go:276] 2 containers: [005cb5179890 ae2339caab7b]
	I0520 11:20:39.737312  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 11:20:39.759250  294755 logs.go:276] 2 containers: [e881ad64ded6 d7bf0c086e3a]
	I0520 11:20:39.759327  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 11:20:39.783251  294755 logs.go:276] 2 containers: [d282e88b488b b1eddd083512]
	I0520 11:20:39.783338  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 11:20:39.804712  294755 logs.go:276] 2 containers: [c8021e0d3603 33738d35cca5]
	I0520 11:20:39.804798  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 11:20:39.828938  294755 logs.go:276] 0 containers: []
	W0520 11:20:39.828962  294755 logs.go:278] No container was found matching "kindnet"
	I0520 11:20:39.829022  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0520 11:20:39.848394  294755 logs.go:276] 1 containers: [1963b29e2abd]
	I0520 11:20:39.848472  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 11:20:39.872029  294755 logs.go:276] 2 containers: [2b538e03219d 7d536cdf43d4]
	I0520 11:20:39.872065  294755 logs.go:123] Gathering logs for etcd [55da4e11e773] ...
	I0520 11:20:39.872078  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55da4e11e773"
	I0520 11:20:39.900184  294755 logs.go:123] Gathering logs for kube-scheduler [d7bf0c086e3a] ...
	I0520 11:20:39.900222  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bf0c086e3a"
	I0520 11:20:39.940252  294755 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:20:39.940281  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:20:40.136266  294755 logs.go:123] Gathering logs for kube-apiserver [40bd562563eb] ...
	I0520 11:20:40.136296  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40bd562563eb"
	I0520 11:20:40.199886  294755 logs.go:123] Gathering logs for kube-apiserver [ac483fa044f2] ...
	I0520 11:20:40.199922  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac483fa044f2"
	I0520 11:20:40.291446  294755 logs.go:123] Gathering logs for etcd [730cfcbb3b45] ...
	I0520 11:20:40.291524  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 730cfcbb3b45"
	I0520 11:20:40.324901  294755 logs.go:123] Gathering logs for coredns [005cb5179890] ...
	I0520 11:20:40.324983  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 005cb5179890"
	I0520 11:20:40.355094  294755 logs.go:123] Gathering logs for kube-proxy [d282e88b488b] ...
	I0520 11:20:40.355120  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d282e88b488b"
	I0520 11:20:40.388428  294755 logs.go:123] Gathering logs for kube-proxy [b1eddd083512] ...
	I0520 11:20:40.388455  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1eddd083512"
	I0520 11:20:40.418991  294755 logs.go:123] Gathering logs for kubernetes-dashboard [1963b29e2abd] ...
	I0520 11:20:40.419015  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1963b29e2abd"
	I0520 11:20:40.446676  294755 logs.go:123] Gathering logs for kubelet ...
	I0520 11:20:40.446703  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:20:40.506686  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.784714    1232 reflector.go:138] object-"kube-system"/"coredns-token-k8wk7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k8wk7" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.506930  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.784913    1232 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507190  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785143    1232 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8x5hz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8x5hz" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507460  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785319    1232 reflector.go:138] object-"kube-system"/"metrics-server-token-4g8dg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-4g8dg" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507746  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785509    1232 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507993  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785712    1232 reflector.go:138] object-"kube-system"/"kube-proxy-token-4lffw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-4lffw" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.508251  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785913    1232 reflector.go:138] object-"default"/"default-token-44xhs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-44xhs" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.514695  294755 logs.go:138] Found kubelet problem: May 20 11:15:03 old-k8s-version-879853 kubelet[1232]: E0520 11:15:03.901214    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.515477  294755 logs.go:138] Found kubelet problem: May 20 11:15:04 old-k8s-version-879853 kubelet[1232]: E0520 11:15:04.328349    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.518645  294755 logs.go:138] Found kubelet problem: May 20 11:15:15 old-k8s-version-879853 kubelet[1232]: E0520 11:15:15.273676    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.523468  294755 logs.go:138] Found kubelet problem: May 20 11:15:22 old-k8s-version-879853 kubelet[1232]: E0520 11:15:22.024884    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.523930  294755 logs.go:138] Found kubelet problem: May 20 11:15:22 old-k8s-version-879853 kubelet[1232]: E0520 11:15:22.730051    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.524327  294755 logs.go:138] Found kubelet problem: May 20 11:15:28 old-k8s-version-879853 kubelet[1232]: E0520 11:15:28.228108    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.529134  294755 logs.go:138] Found kubelet problem: May 20 11:15:34 old-k8s-version-879853 kubelet[1232]: E0520 11:15:34.744287    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.529709  294755 logs.go:138] Found kubelet problem: May 20 11:15:34 old-k8s-version-879853 kubelet[1232]: E0520 11:15:34.859501    1232 pod_workers.go:191] Error syncing pod b1d644d9-1159-48f3-95ed-e9f20098fc95 ("storage-provisioner_kube-system(b1d644d9-1159-48f3-95ed-e9f20098fc95)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b1d644d9-1159-48f3-95ed-e9f20098fc95)"
	W0520 11:20:40.531951  294755 logs.go:138] Found kubelet problem: May 20 11:15:39 old-k8s-version-879853 kubelet[1232]: E0520 11:15:39.300402    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.532631  294755 logs.go:138] Found kubelet problem: May 20 11:15:49 old-k8s-version-879853 kubelet[1232]: E0520 11:15:49.228221    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.532816  294755 logs.go:138] Found kubelet problem: May 20 11:15:51 old-k8s-version-879853 kubelet[1232]: E0520 11:15:51.266478    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.535296  294755 logs.go:138] Found kubelet problem: May 20 11:16:01 old-k8s-version-879853 kubelet[1232]: E0520 11:16:01.699755    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.535494  294755 logs.go:138] Found kubelet problem: May 20 11:16:03 old-k8s-version-879853 kubelet[1232]: E0520 11:16:03.235985    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.535692  294755 logs.go:138] Found kubelet problem: May 20 11:16:14 old-k8s-version-879853 kubelet[1232]: E0520 11:16:14.227928    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.535875  294755 logs.go:138] Found kubelet problem: May 20 11:16:15 old-k8s-version-879853 kubelet[1232]: E0520 11:16:15.228541    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.538290  294755 logs.go:138] Found kubelet problem: May 20 11:16:27 old-k8s-version-879853 kubelet[1232]: E0520 11:16:27.259598    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.538542  294755 logs.go:138] Found kubelet problem: May 20 11:16:29 old-k8s-version-879853 kubelet[1232]: E0520 11:16:29.239845    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.538762  294755 logs.go:138] Found kubelet problem: May 20 11:16:39 old-k8s-version-879853 kubelet[1232]: E0520 11:16:39.229695    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.541076  294755 logs.go:138] Found kubelet problem: May 20 11:16:42 old-k8s-version-879853 kubelet[1232]: E0520 11:16:42.700115    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.541296  294755 logs.go:138] Found kubelet problem: May 20 11:16:51 old-k8s-version-879853 kubelet[1232]: E0520 11:16:51.246627    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.541529  294755 logs.go:138] Found kubelet problem: May 20 11:16:54 old-k8s-version-879853 kubelet[1232]: E0520 11:16:54.228244    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.541751  294755 logs.go:138] Found kubelet problem: May 20 11:17:06 old-k8s-version-879853 kubelet[1232]: E0520 11:17:06.228973    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542010  294755 logs.go:138] Found kubelet problem: May 20 11:17:06 old-k8s-version-879853 kubelet[1232]: E0520 11:17:06.229350    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542236  294755 logs.go:138] Found kubelet problem: May 20 11:17:20 old-k8s-version-879853 kubelet[1232]: E0520 11:17:20.228064    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542446  294755 logs.go:138] Found kubelet problem: May 20 11:17:21 old-k8s-version-879853 kubelet[1232]: E0520 11:17:21.228613    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542656  294755 logs.go:138] Found kubelet problem: May 20 11:17:32 old-k8s-version-879853 kubelet[1232]: E0520 11:17:32.228111    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542881  294755 logs.go:138] Found kubelet problem: May 20 11:17:34 old-k8s-version-879853 kubelet[1232]: E0520 11:17:34.243800    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.543091  294755 logs.go:138] Found kubelet problem: May 20 11:17:46 old-k8s-version-879853 kubelet[1232]: E0520 11:17:46.228054    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.543315  294755 logs.go:138] Found kubelet problem: May 20 11:17:48 old-k8s-version-879853 kubelet[1232]: E0520 11:17:48.228327    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.543539  294755 logs.go:138] Found kubelet problem: May 20 11:18:00 old-k8s-version-879853 kubelet[1232]: E0520 11:18:00.229869    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.545695  294755 logs.go:138] Found kubelet problem: May 20 11:18:00 old-k8s-version-879853 kubelet[1232]: E0520 11:18:00.274754    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.548194  294755 logs.go:138] Found kubelet problem: May 20 11:18:11 old-k8s-version-879853 kubelet[1232]: E0520 11:18:11.688233    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.548446  294755 logs.go:138] Found kubelet problem: May 20 11:18:15 old-k8s-version-879853 kubelet[1232]: E0520 11:18:15.228362    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.548678  294755 logs.go:138] Found kubelet problem: May 20 11:18:23 old-k8s-version-879853 kubelet[1232]: E0520 11:18:23.238275    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.548892  294755 logs.go:138] Found kubelet problem: May 20 11:18:26 old-k8s-version-879853 kubelet[1232]: E0520 11:18:26.228357    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549140  294755 logs.go:138] Found kubelet problem: May 20 11:18:34 old-k8s-version-879853 kubelet[1232]: E0520 11:18:34.227935    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549355  294755 logs.go:138] Found kubelet problem: May 20 11:18:40 old-k8s-version-879853 kubelet[1232]: E0520 11:18:40.228209    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549599  294755 logs.go:138] Found kubelet problem: May 20 11:18:49 old-k8s-version-879853 kubelet[1232]: E0520 11:18:49.235833    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549810  294755 logs.go:138] Found kubelet problem: May 20 11:18:51 old-k8s-version-879853 kubelet[1232]: E0520 11:18:51.228282    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550043  294755 logs.go:138] Found kubelet problem: May 20 11:19:03 old-k8s-version-879853 kubelet[1232]: E0520 11:19:03.230228    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550286  294755 logs.go:138] Found kubelet problem: May 20 11:19:06 old-k8s-version-879853 kubelet[1232]: E0520 11:19:06.228367    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550514  294755 logs.go:138] Found kubelet problem: May 20 11:19:15 old-k8s-version-879853 kubelet[1232]: E0520 11:19:15.232306    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550729  294755 logs.go:138] Found kubelet problem: May 20 11:19:19 old-k8s-version-879853 kubelet[1232]: E0520 11:19:19.228241    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550957  294755 logs.go:138] Found kubelet problem: May 20 11:19:26 old-k8s-version-879853 kubelet[1232]: E0520 11:19:26.228031    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551289  294755 logs.go:138] Found kubelet problem: May 20 11:19:30 old-k8s-version-879853 kubelet[1232]: E0520 11:19:30.228171    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551527  294755 logs.go:138] Found kubelet problem: May 20 11:19:40 old-k8s-version-879853 kubelet[1232]: E0520 11:19:40.228690    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551738  294755 logs.go:138] Found kubelet problem: May 20 11:19:41 old-k8s-version-879853 kubelet[1232]: E0520 11:19:41.228364    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551963  294755 logs.go:138] Found kubelet problem: May 20 11:19:53 old-k8s-version-879853 kubelet[1232]: E0520 11:19:53.236437    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552217  294755 logs.go:138] Found kubelet problem: May 20 11:19:56 old-k8s-version-879853 kubelet[1232]: E0520 11:19:56.228010    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552510  294755 logs.go:138] Found kubelet problem: May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.228563    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552730  294755 logs.go:138] Found kubelet problem: May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552958  294755 logs.go:138] Found kubelet problem: May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.553181  294755 logs.go:138] Found kubelet problem: May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.553410  294755 logs.go:138] Found kubelet problem: May 20 11:20:32 old-k8s-version-879853 kubelet[1232]: E0520 11:20:32.228101    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.553628  294755 logs.go:138] Found kubelet problem: May 20 11:20:38 old-k8s-version-879853 kubelet[1232]: E0520 11:20:38.228264    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0520 11:20:40.553657  294755 logs.go:123] Gathering logs for container status ...
	I0520 11:20:40.553689  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:20:40.647578  294755 logs.go:123] Gathering logs for storage-provisioner [7d536cdf43d4] ...
	I0520 11:20:40.647718  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d536cdf43d4"
	I0520 11:20:40.701383  294755 logs.go:123] Gathering logs for storage-provisioner [2b538e03219d] ...
	I0520 11:20:40.701409  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b538e03219d"
	I0520 11:20:40.737898  294755 logs.go:123] Gathering logs for coredns [ae2339caab7b] ...
	I0520 11:20:40.737935  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2339caab7b"
	I0520 11:20:40.769964  294755 logs.go:123] Gathering logs for kube-scheduler [e881ad64ded6] ...
	I0520 11:20:40.770002  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e881ad64ded6"
	I0520 11:20:40.795119  294755 logs.go:123] Gathering logs for kube-controller-manager [c8021e0d3603] ...
	I0520 11:20:40.795148  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8021e0d3603"
	I0520 11:20:40.846418  294755 logs.go:123] Gathering logs for kube-controller-manager [33738d35cca5] ...
	I0520 11:20:40.846450  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33738d35cca5"
	I0520 11:20:40.909951  294755 logs.go:123] Gathering logs for Docker ...
	I0520 11:20:40.909980  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 11:20:40.960750  294755 logs.go:123] Gathering logs for dmesg ...
	I0520 11:20:40.960794  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:20:41.000040  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:41.000121  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 11:20:41.000202  294755 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0520 11:20:41.000243  294755 out.go:239]   May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000276  294755 out.go:239]   May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000327  294755 out.go:239]   May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000363  294755 out.go:239]   May 20 11:20:32 old-k8s-version-879853 kubelet[1232]: E0520 11:20:32.228101    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:32 old-k8s-version-879853 kubelet[1232]: E0520 11:20:32.228101    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000408  294755 out.go:239]   May 20 11:20:38 old-k8s-version-879853 kubelet[1232]: E0520 11:20:38.228264    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  May 20 11:20:38 old-k8s-version-879853 kubelet[1232]: E0520 11:20:38.228264    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0520 11:20:41.000448  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:41.000472  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:20:51.000820  294755 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0520 11:20:51.013646  294755 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0520 11:20:51.016178  294755 out.go:177] 
	W0520 11:20:51.017929  294755 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0520 11:20:51.017985  294755 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0520 11:20:51.018012  294755 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0520 11:20:51.018017  294755 out.go:239] * 
	* 
	W0520 11:20:51.018917  294755 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:20:51.021916  294755 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-879853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
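The exit status 102 reported here accompanies the K8S_UNHEALTHY_CONTROL_PLANE message in the log above: the apiserver answered /healthz, but the control plane never reported the expected v1.20.0 version within the 6m0s wait. The log itself suggests purging the profile before retrying; a minimal cleanup-and-retry sketch, reusing the same binary path and flags as the failing run (the out/ path assumes the test workspace layout shown above):

	# Wipe all minikube profiles and cached state, as the log above suggests.
	out/minikube-linux-arm64 delete --all --purge
	# Retry the identical second start that failed here.
	out/minikube-linux-arm64 start -p old-k8s-version-879853 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0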
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-879853
helpers_test.go:235: (dbg) docker inspect old-k8s-version-879853:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fc7eeb36ac68b441ad6611b11c80a506159d575ccc8ec9815b80f4bac177c99d",
	        "Created": "2024-05-20T11:11:56.336888091Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 294950,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-20T11:14:38.755172694Z",
	            "FinishedAt": "2024-05-20T11:14:37.589470136Z"
	        },
	        "Image": "sha256:56620e18f2c2c9a0448fc43c42f840334bd2baea497ff8deae66477dd0dbfecf",
	        "ResolvConfPath": "/var/lib/docker/containers/fc7eeb36ac68b441ad6611b11c80a506159d575ccc8ec9815b80f4bac177c99d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fc7eeb36ac68b441ad6611b11c80a506159d575ccc8ec9815b80f4bac177c99d/hostname",
	        "HostsPath": "/var/lib/docker/containers/fc7eeb36ac68b441ad6611b11c80a506159d575ccc8ec9815b80f4bac177c99d/hosts",
	        "LogPath": "/var/lib/docker/containers/fc7eeb36ac68b441ad6611b11c80a506159d575ccc8ec9815b80f4bac177c99d/fc7eeb36ac68b441ad6611b11c80a506159d575ccc8ec9815b80f4bac177c99d-json.log",
	        "Name": "/old-k8s-version-879853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-879853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-879853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/239981cb71a7ea03c9f41feb446de0d5a1f93e9aecc598d3fecfdf56be27afbe-init/diff:/var/lib/docker/overlay2/5223768ff4f8d0789b9175fc3fdf07e45fc06ea6efae7d6f7831e460b38e1113/diff",
	                "MergedDir": "/var/lib/docker/overlay2/239981cb71a7ea03c9f41feb446de0d5a1f93e9aecc598d3fecfdf56be27afbe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/239981cb71a7ea03c9f41feb446de0d5a1f93e9aecc598d3fecfdf56be27afbe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/239981cb71a7ea03c9f41feb446de0d5a1f93e9aecc598d3fecfdf56be27afbe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-879853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-879853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-879853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-879853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-879853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6350ea3633f04bc420bace9763ac79eb3063a1ca4ad788a326edec9ec86a8927",
	            "SandboxKey": "/var/run/docker/netns/6350ea3633f0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-879853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "NetworkID": "bcbc67b107a43206a670d234c4dbb1efc28e72c3091a0ee9bf5d5da105a081e5",
	                    "EndpointID": "93aba0ad1ffdcaa7d01e9bc6e34be94baaae8e4fd60e1a4527c34fe5af0d649d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "old-k8s-version-879853",
	                        "fc7eeb36ac68"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
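The inspect output shows the kicbase container itself is healthy: State.Running is true and the profile network old-k8s-version-879853 assigns 192.168.85.2, the same address the healthz probe above reached, so the failure sits inside the control plane rather than at the Docker layer. For a quicker look at just those fields, docker inspect accepts a Go-template format flag; a small sketch (field paths taken from the JSON above):

	# Print only the container state and its IP on the profile network.
	docker inspect -f '{{.State.Status}}' old-k8s-version-879853
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-879853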
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879853 -n old-k8s-version-879853
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-879853 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-879853 logs -n 25: (1.630326405s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | docker-flags-803587 ssh                                | docker-flags-803587          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	|         | sudo systemctl show docker                             |                              |         |         |                     |                     |
	|         | --property=Environment                                 |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| ssh     | docker-flags-803587 ssh                                | docker-flags-803587          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	|         | sudo systemctl show docker                             |                              |         |         |                     |                     |
	|         | --property=ExecStart                                   |                              |         |         |                     |                     |
	|         | --no-pager                                             |                              |         |         |                     |                     |
	| delete  | -p docker-flags-803587                                 | docker-flags-803587          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	| start   | -p cert-options-156777                                 | cert-options-156777          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	| ssh     | cert-options-156777 ssh                                | cert-options-156777          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-156777 -- sudo                         | cert-options-156777          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-156777                                 | cert-options-156777          | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:11 UTC |
	| start   | -p old-k8s-version-879853                              | old-k8s-version-879853       | jenkins | v1.33.1 | 20 May 24 11:11 UTC | 20 May 24 11:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-748987                              | cert-expiration-748987       | jenkins | v1.33.1 | 20 May 24 11:13 UTC | 20 May 24 11:14 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-748987                              | cert-expiration-748987       | jenkins | v1.33.1 | 20 May 24 11:14 UTC | 20 May 24 11:14 UTC |
	| start   | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:14 UTC | 20 May 24 11:15 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-879853        | old-k8s-version-879853       | jenkins | v1.33.1 | 20 May 24 11:14 UTC | 20 May 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-879853                              | old-k8s-version-879853       | jenkins | v1.33.1 | 20 May 24 11:14 UTC | 20 May 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-879853             | old-k8s-version-879853       | jenkins | v1.33.1 | 20 May 24 11:14 UTC | 20 May 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-879853                              | old-k8s-version-879853       | jenkins | v1.33.1 | 20 May 24 11:14 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-753976  | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:15 UTC | 20 May 24 11:15 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:15 UTC | 20 May 24 11:15 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-753976       | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:15 UTC | 20 May 24 11:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:15 UTC | 20 May 24 11:20 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=docker                             |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-753976                           | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-753976 | jenkins | v1.33.1 | 20 May 24 11:20 UTC | 20 May 24 11:20 UTC |
	|         | default-k8s-diff-port-753976                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-601362                                  | embed-certs-601362           | jenkins | v1.33.1 | 20 May 24 11:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         |  --container-runtime=docker                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 11:20:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 11:20:40.153648  309055 out.go:291] Setting OutFile to fd 1 ...
	I0520 11:20:40.153806  309055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:20:40.153811  309055 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:40.153816  309055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 11:20:40.154144  309055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 11:20:40.154596  309055 out.go:298] Setting JSON to false
	I0520 11:20:40.155660  309055 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3761,"bootTime":1716200280,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 11:20:40.155734  309055 start.go:139] virtualization:  
	I0520 11:20:40.158350  309055 out.go:177] * [embed-certs-601362] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 11:20:40.160782  309055 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 11:20:40.162359  309055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 11:20:40.160965  309055 notify.go:220] Checking for updates...
	I0520 11:20:40.168835  309055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 11:20:40.170874  309055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 11:20:40.172800  309055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 11:20:40.174637  309055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 11:20:40.177661  309055 config.go:182] Loaded profile config "old-k8s-version-879853": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0520 11:20:40.177764  309055 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 11:20:40.209128  309055 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 11:20:40.209248  309055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:20:40.311483  309055 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 11:20:40.300880776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:20:40.311587  309055 docker.go:295] overlay module found
	I0520 11:20:40.313927  309055 out.go:177] * Using the docker driver based on user configuration
	I0520 11:20:40.315817  309055 start.go:297] selected driver: docker
	I0520 11:20:40.315832  309055 start.go:901] validating driver "docker" against <nil>
	I0520 11:20:40.315846  309055 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 11:20:40.316477  309055 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 11:20:40.410643  309055 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 11:20:40.398677384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 11:20:40.410810  309055 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 11:20:40.411029  309055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0520 11:20:40.413829  309055 out.go:177] * Using Docker driver with root privileges
	I0520 11:20:40.416048  309055 cni.go:84] Creating CNI manager for ""
	I0520 11:20:40.416086  309055 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 11:20:40.416104  309055 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 11:20:40.416180  309055 start.go:340] cluster config:
	{Name:embed-certs-601362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 11:20:40.419114  309055 out.go:177] * Starting "embed-certs-601362" primary control-plane node in "embed-certs-601362" cluster
	I0520 11:20:40.421377  309055 cache.go:121] Beginning downloading kic base image for docker with docker
	I0520 11:20:40.423047  309055 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0520 11:20:40.424851  309055 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 11:20:40.424891  309055 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 11:20:40.424941  309055 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 11:20:40.425135  309055 cache.go:56] Caching tarball of preloaded images
	I0520 11:20:40.425220  309055 preload.go:173] Found /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0520 11:20:40.425231  309055 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0520 11:20:40.425335  309055 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/embed-certs-601362/config.json ...
	I0520 11:20:40.425353  309055 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/embed-certs-601362/config.json: {Name:mk59d7a98675dbb2e5abdd5803478d7bc1e01456 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0520 11:20:40.440270  309055 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0520 11:20:40.440297  309055 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0520 11:20:40.440314  309055 cache.go:194] Successfully downloaded all kic artifacts
	I0520 11:20:40.440357  309055 start.go:360] acquireMachinesLock for embed-certs-601362: {Name:mkc5652e69c99cb8c4d760ae0aff56e086dc8450 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0520 11:20:40.440478  309055 start.go:364] duration metric: took 103.541µs to acquireMachinesLock for "embed-certs-601362"
	I0520 11:20:40.440502  294755 start.go:93] Provisioning new machine with config: &{Name:embed-certs-601362 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:embed-certs-601362 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0520 11:20:40.440577  309055 start.go:125] createHost starting for "" (driver="docker")
	I0520 11:20:39.631205  294755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 11:20:39.646682  294755 api_server.go:72] duration metric: took 5m51.388454699s to wait for apiserver process to appear ...
	I0520 11:20:39.646717  294755 api_server.go:88] waiting for apiserver healthz status ...
	I0520 11:20:39.646814  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0520 11:20:39.681446  294755 logs.go:276] 2 containers: [40bd562563eb ac483fa044f2]
	I0520 11:20:39.681529  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0520 11:20:39.715894  294755 logs.go:276] 2 containers: [730cfcbb3b45 55da4e11e773]
	I0520 11:20:39.715965  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0520 11:20:39.737234  294755 logs.go:276] 2 containers: [005cb5179890 ae2339caab7b]
	I0520 11:20:39.737312  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0520 11:20:39.759250  294755 logs.go:276] 2 containers: [e881ad64ded6 d7bf0c086e3a]
	I0520 11:20:39.759327  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0520 11:20:39.783251  294755 logs.go:276] 2 containers: [d282e88b488b b1eddd083512]
	I0520 11:20:39.783338  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0520 11:20:39.804712  294755 logs.go:276] 2 containers: [c8021e0d3603 33738d35cca5]
	I0520 11:20:39.804798  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0520 11:20:39.828938  294755 logs.go:276] 0 containers: []
	W0520 11:20:39.828962  294755 logs.go:278] No container was found matching "kindnet"
	I0520 11:20:39.829022  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0520 11:20:39.848394  294755 logs.go:276] 1 containers: [1963b29e2abd]
	I0520 11:20:39.848472  294755 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0520 11:20:39.872029  294755 logs.go:276] 2 containers: [2b538e03219d 7d536cdf43d4]
	I0520 11:20:39.872065  294755 logs.go:123] Gathering logs for etcd [55da4e11e773] ...
	I0520 11:20:39.872078  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55da4e11e773"
	I0520 11:20:39.900184  294755 logs.go:123] Gathering logs for kube-scheduler [d7bf0c086e3a] ...
	I0520 11:20:39.900222  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d7bf0c086e3a"
	I0520 11:20:39.940252  294755 logs.go:123] Gathering logs for describe nodes ...
	I0520 11:20:39.940281  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0520 11:20:40.136266  294755 logs.go:123] Gathering logs for kube-apiserver [40bd562563eb] ...
	I0520 11:20:40.136296  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 40bd562563eb"
	I0520 11:20:40.199886  294755 logs.go:123] Gathering logs for kube-apiserver [ac483fa044f2] ...
	I0520 11:20:40.199922  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac483fa044f2"
	I0520 11:20:40.291446  294755 logs.go:123] Gathering logs for etcd [730cfcbb3b45] ...
	I0520 11:20:40.291524  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 730cfcbb3b45"
	I0520 11:20:40.324901  294755 logs.go:123] Gathering logs for coredns [005cb5179890] ...
	I0520 11:20:40.324983  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 005cb5179890"
	I0520 11:20:40.355094  294755 logs.go:123] Gathering logs for kube-proxy [d282e88b488b] ...
	I0520 11:20:40.355120  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d282e88b488b"
	I0520 11:20:40.388428  294755 logs.go:123] Gathering logs for kube-proxy [b1eddd083512] ...
	I0520 11:20:40.388455  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b1eddd083512"
	I0520 11:20:40.418991  294755 logs.go:123] Gathering logs for kubernetes-dashboard [1963b29e2abd] ...
	I0520 11:20:40.419015  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1963b29e2abd"
	I0520 11:20:40.446676  294755 logs.go:123] Gathering logs for kubelet ...
	I0520 11:20:40.446703  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0520 11:20:40.506686  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.784714    1232 reflector.go:138] object-"kube-system"/"coredns-token-k8wk7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-k8wk7" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.506930  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.784913    1232 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507190  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785143    1232 reflector.go:138] object-"kube-system"/"storage-provisioner-token-8x5hz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-8x5hz" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507460  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785319    1232 reflector.go:138] object-"kube-system"/"metrics-server-token-4g8dg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-4g8dg" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507746  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785509    1232 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.507993  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785712    1232 reflector.go:138] object-"kube-system"/"kube-proxy-token-4lffw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-4lffw" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.508251  294755 logs.go:138] Found kubelet problem: May 20 11:15:01 old-k8s-version-879853 kubelet[1232]: E0520 11:15:01.785913    1232 reflector.go:138] object-"default"/"default-token-44xhs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-44xhs" is forbidden: User "system:node:old-k8s-version-879853" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-879853' and this object
	W0520 11:20:40.514695  294755 logs.go:138] Found kubelet problem: May 20 11:15:03 old-k8s-version-879853 kubelet[1232]: E0520 11:15:03.901214    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.515477  294755 logs.go:138] Found kubelet problem: May 20 11:15:04 old-k8s-version-879853 kubelet[1232]: E0520 11:15:04.328349    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.518645  294755 logs.go:138] Found kubelet problem: May 20 11:15:15 old-k8s-version-879853 kubelet[1232]: E0520 11:15:15.273676    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.523468  294755 logs.go:138] Found kubelet problem: May 20 11:15:22 old-k8s-version-879853 kubelet[1232]: E0520 11:15:22.024884    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.523930  294755 logs.go:138] Found kubelet problem: May 20 11:15:22 old-k8s-version-879853 kubelet[1232]: E0520 11:15:22.730051    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.524327  294755 logs.go:138] Found kubelet problem: May 20 11:15:28 old-k8s-version-879853 kubelet[1232]: E0520 11:15:28.228108    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.529134  294755 logs.go:138] Found kubelet problem: May 20 11:15:34 old-k8s-version-879853 kubelet[1232]: E0520 11:15:34.744287    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.529709  294755 logs.go:138] Found kubelet problem: May 20 11:15:34 old-k8s-version-879853 kubelet[1232]: E0520 11:15:34.859501    1232 pod_workers.go:191] Error syncing pod b1d644d9-1159-48f3-95ed-e9f20098fc95 ("storage-provisioner_kube-system(b1d644d9-1159-48f3-95ed-e9f20098fc95)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b1d644d9-1159-48f3-95ed-e9f20098fc95)"
	W0520 11:20:40.531951  294755 logs.go:138] Found kubelet problem: May 20 11:15:39 old-k8s-version-879853 kubelet[1232]: E0520 11:15:39.300402    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.532631  294755 logs.go:138] Found kubelet problem: May 20 11:15:49 old-k8s-version-879853 kubelet[1232]: E0520 11:15:49.228221    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.532816  294755 logs.go:138] Found kubelet problem: May 20 11:15:51 old-k8s-version-879853 kubelet[1232]: E0520 11:15:51.266478    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.535296  294755 logs.go:138] Found kubelet problem: May 20 11:16:01 old-k8s-version-879853 kubelet[1232]: E0520 11:16:01.699755    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.535494  294755 logs.go:138] Found kubelet problem: May 20 11:16:03 old-k8s-version-879853 kubelet[1232]: E0520 11:16:03.235985    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.535692  294755 logs.go:138] Found kubelet problem: May 20 11:16:14 old-k8s-version-879853 kubelet[1232]: E0520 11:16:14.227928    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.535875  294755 logs.go:138] Found kubelet problem: May 20 11:16:15 old-k8s-version-879853 kubelet[1232]: E0520 11:16:15.228541    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.538290  294755 logs.go:138] Found kubelet problem: May 20 11:16:27 old-k8s-version-879853 kubelet[1232]: E0520 11:16:27.259598    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.538542  294755 logs.go:138] Found kubelet problem: May 20 11:16:29 old-k8s-version-879853 kubelet[1232]: E0520 11:16:29.239845    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.538762  294755 logs.go:138] Found kubelet problem: May 20 11:16:39 old-k8s-version-879853 kubelet[1232]: E0520 11:16:39.229695    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.541076  294755 logs.go:138] Found kubelet problem: May 20 11:16:42 old-k8s-version-879853 kubelet[1232]: E0520 11:16:42.700115    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.541296  294755 logs.go:138] Found kubelet problem: May 20 11:16:51 old-k8s-version-879853 kubelet[1232]: E0520 11:16:51.246627    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.541529  294755 logs.go:138] Found kubelet problem: May 20 11:16:54 old-k8s-version-879853 kubelet[1232]: E0520 11:16:54.228244    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.541751  294755 logs.go:138] Found kubelet problem: May 20 11:17:06 old-k8s-version-879853 kubelet[1232]: E0520 11:17:06.228973    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542010  294755 logs.go:138] Found kubelet problem: May 20 11:17:06 old-k8s-version-879853 kubelet[1232]: E0520 11:17:06.229350    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542236  294755 logs.go:138] Found kubelet problem: May 20 11:17:20 old-k8s-version-879853 kubelet[1232]: E0520 11:17:20.228064    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542446  294755 logs.go:138] Found kubelet problem: May 20 11:17:21 old-k8s-version-879853 kubelet[1232]: E0520 11:17:21.228613    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542656  294755 logs.go:138] Found kubelet problem: May 20 11:17:32 old-k8s-version-879853 kubelet[1232]: E0520 11:17:32.228111    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.542881  294755 logs.go:138] Found kubelet problem: May 20 11:17:34 old-k8s-version-879853 kubelet[1232]: E0520 11:17:34.243800    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.543091  294755 logs.go:138] Found kubelet problem: May 20 11:17:46 old-k8s-version-879853 kubelet[1232]: E0520 11:17:46.228054    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.543315  294755 logs.go:138] Found kubelet problem: May 20 11:17:48 old-k8s-version-879853 kubelet[1232]: E0520 11:17:48.228327    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.543539  294755 logs.go:138] Found kubelet problem: May 20 11:18:00 old-k8s-version-879853 kubelet[1232]: E0520 11:18:00.229869    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.545695  294755 logs.go:138] Found kubelet problem: May 20 11:18:00 old-k8s-version-879853 kubelet[1232]: E0520 11:18:00.274754    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0520 11:20:40.548194  294755 logs.go:138] Found kubelet problem: May 20 11:18:11 old-k8s-version-879853 kubelet[1232]: E0520 11:18:11.688233    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	W0520 11:20:40.548446  294755 logs.go:138] Found kubelet problem: May 20 11:18:15 old-k8s-version-879853 kubelet[1232]: E0520 11:18:15.228362    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.548678  294755 logs.go:138] Found kubelet problem: May 20 11:18:23 old-k8s-version-879853 kubelet[1232]: E0520 11:18:23.238275    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.548892  294755 logs.go:138] Found kubelet problem: May 20 11:18:26 old-k8s-version-879853 kubelet[1232]: E0520 11:18:26.228357    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549140  294755 logs.go:138] Found kubelet problem: May 20 11:18:34 old-k8s-version-879853 kubelet[1232]: E0520 11:18:34.227935    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549355  294755 logs.go:138] Found kubelet problem: May 20 11:18:40 old-k8s-version-879853 kubelet[1232]: E0520 11:18:40.228209    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549599  294755 logs.go:138] Found kubelet problem: May 20 11:18:49 old-k8s-version-879853 kubelet[1232]: E0520 11:18:49.235833    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.549810  294755 logs.go:138] Found kubelet problem: May 20 11:18:51 old-k8s-version-879853 kubelet[1232]: E0520 11:18:51.228282    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550043  294755 logs.go:138] Found kubelet problem: May 20 11:19:03 old-k8s-version-879853 kubelet[1232]: E0520 11:19:03.230228    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550286  294755 logs.go:138] Found kubelet problem: May 20 11:19:06 old-k8s-version-879853 kubelet[1232]: E0520 11:19:06.228367    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550514  294755 logs.go:138] Found kubelet problem: May 20 11:19:15 old-k8s-version-879853 kubelet[1232]: E0520 11:19:15.232306    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550729  294755 logs.go:138] Found kubelet problem: May 20 11:19:19 old-k8s-version-879853 kubelet[1232]: E0520 11:19:19.228241    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.550957  294755 logs.go:138] Found kubelet problem: May 20 11:19:26 old-k8s-version-879853 kubelet[1232]: E0520 11:19:26.228031    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551289  294755 logs.go:138] Found kubelet problem: May 20 11:19:30 old-k8s-version-879853 kubelet[1232]: E0520 11:19:30.228171    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551527  294755 logs.go:138] Found kubelet problem: May 20 11:19:40 old-k8s-version-879853 kubelet[1232]: E0520 11:19:40.228690    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551738  294755 logs.go:138] Found kubelet problem: May 20 11:19:41 old-k8s-version-879853 kubelet[1232]: E0520 11:19:41.228364    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.551963  294755 logs.go:138] Found kubelet problem: May 20 11:19:53 old-k8s-version-879853 kubelet[1232]: E0520 11:19:53.236437    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552217  294755 logs.go:138] Found kubelet problem: May 20 11:19:56 old-k8s-version-879853 kubelet[1232]: E0520 11:19:56.228010    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552510  294755 logs.go:138] Found kubelet problem: May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.228563    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552730  294755 logs.go:138] Found kubelet problem: May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.552958  294755 logs.go:138] Found kubelet problem: May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.553181  294755 logs.go:138] Found kubelet problem: May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.553410  294755 logs.go:138] Found kubelet problem: May 20 11:20:32 old-k8s-version-879853 kubelet[1232]: E0520 11:20:32.228101    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:40.553628  294755 logs.go:138] Found kubelet problem: May 20 11:20:38 old-k8s-version-879853 kubelet[1232]: E0520 11:20:38.228264    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0520 11:20:40.553657  294755 logs.go:123] Gathering logs for container status ...
	I0520 11:20:40.553689  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0520 11:20:40.647578  294755 logs.go:123] Gathering logs for storage-provisioner [7d536cdf43d4] ...
	I0520 11:20:40.647718  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d536cdf43d4"
	I0520 11:20:40.701383  294755 logs.go:123] Gathering logs for storage-provisioner [2b538e03219d] ...
	I0520 11:20:40.701409  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2b538e03219d"
	I0520 11:20:40.737898  294755 logs.go:123] Gathering logs for coredns [ae2339caab7b] ...
	I0520 11:20:40.737935  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2339caab7b"
	I0520 11:20:40.769964  294755 logs.go:123] Gathering logs for kube-scheduler [e881ad64ded6] ...
	I0520 11:20:40.770002  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e881ad64ded6"
	I0520 11:20:40.795119  294755 logs.go:123] Gathering logs for kube-controller-manager [c8021e0d3603] ...
	I0520 11:20:40.795148  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8021e0d3603"
	I0520 11:20:40.846418  294755 logs.go:123] Gathering logs for kube-controller-manager [33738d35cca5] ...
	I0520 11:20:40.846450  294755 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33738d35cca5"
	I0520 11:20:40.909951  294755 logs.go:123] Gathering logs for Docker ...
	I0520 11:20:40.909980  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0520 11:20:40.960750  294755 logs.go:123] Gathering logs for dmesg ...
	I0520 11:20:40.960794  294755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0520 11:20:41.000040  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:41.000121  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0520 11:20:41.000202  294755 out.go:239] X Problems detected in kubelet:
	W0520 11:20:41.000243  294755 out.go:239]   May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000276  294755 out.go:239]   May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000327  294755 out.go:239]   May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000363  294755 out.go:239]   May 20 11:20:32 old-k8s-version-879853 kubelet[1232]: E0520 11:20:32.228101    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	W0520 11:20:41.000408  294755 out.go:239]   May 20 11:20:38 old-k8s-version-879853 kubelet[1232]: E0520 11:20:38.228264    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0520 11:20:41.000448  294755 out.go:304] Setting ErrFile to fd 2...
	I0520 11:20:41.000472  294755 out.go:338] TERM=,COLORTERM=, which probably does not support color
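The 294755 lines above show the per-container log sweep minikube runs while it waits for the apiserver healthz check: list matching container IDs with docker ps filters, then pull the last 400 lines of each container with docker logs. The sketch below is purely illustrative of that pattern and is not minikube's code; it runs the commands locally through os/exec instead of over SSH as ssh_runner.go does, and the filter names are simply the ones visible in this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container name filters mirroring the ones seen in the log above.
	filters := []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
		"k8s_kube-scheduler", "k8s_kube-proxy", "k8s_kube-controller-manager"}
	for _, f := range filters {
		// docker ps -a --filter=name=<f> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a", "--filter", "name="+f, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("listing", f, "failed:", err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// docker logs --tail 400 <id>, as in the "Gathering logs for ..." lines.
			logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Println("logs for", id, "failed:", err)
				continue
			}
			fmt.Printf("=== %s (%s): %d bytes of logs ===\n", f, id, len(logs))
		}
	}
}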
	I0520 11:20:40.443330  309055 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0520 11:20:40.443681  309055 start.go:159] libmachine.API.Create for "embed-certs-601362" (driver="docker")
	I0520 11:20:40.443709  309055 client.go:168] LocalClient.Create starting
	I0520 11:20:40.443771  309055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem
	I0520 11:20:40.443804  309055 main.go:141] libmachine: Decoding PEM data...
	I0520 11:20:40.443821  309055 main.go:141] libmachine: Parsing certificate...
	I0520 11:20:40.443885  309055 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem
	I0520 11:20:40.443903  309055 main.go:141] libmachine: Decoding PEM data...
	I0520 11:20:40.443913  309055 main.go:141] libmachine: Parsing certificate...
	I0520 11:20:40.444379  309055 cli_runner.go:164] Run: docker network inspect embed-certs-601362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0520 11:20:40.477764  309055 cli_runner.go:211] docker network inspect embed-certs-601362 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0520 11:20:40.477869  309055 network_create.go:281] running [docker network inspect embed-certs-601362] to gather additional debugging logs...
	I0520 11:20:40.477890  309055 cli_runner.go:164] Run: docker network inspect embed-certs-601362
	W0520 11:20:40.502245  309055 cli_runner.go:211] docker network inspect embed-certs-601362 returned with exit code 1
	I0520 11:20:40.502275  309055 network_create.go:284] error running [docker network inspect embed-certs-601362]: docker network inspect embed-certs-601362: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-601362 not found
	I0520 11:20:40.502289  309055 network_create.go:286] output of [docker network inspect embed-certs-601362]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-601362 not found
	
	** /stderr **
	I0520 11:20:40.502387  309055 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0520 11:20:40.532469  309055 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2901dbd6a710 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:5c:c2:d9:d2} reservation:<nil>}
	I0520 11:20:40.533157  309055 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-c8b83b019e12 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:8a:de:6c:91} reservation:<nil>}
	I0520 11:20:40.534276  309055 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-723ab3c1b5fc IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:85:35:1a:94} reservation:<nil>}
	I0520 11:20:40.535017  309055 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400177a0c0}
	I0520 11:20:40.535053  309055 network_create.go:124] attempt to create docker network embed-certs-601362 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0520 11:20:40.535161  309055 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-601362 embed-certs-601362
	I0520 11:20:40.615747  309055 network_create.go:108] docker network embed-certs-601362 192.168.76.0/24 created
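The network.go lines just above walk candidate /24 subnets (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) and create a bridge network on the first one that no existing Docker network already uses. A rough, hypothetical sketch of that idea, using only the docker CLI rather than minikube's actual network_create.go logic (the network name example-net is a placeholder):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets asks the local docker daemon which subnets its networks already use.
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return taken
	}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect",
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}
	return taken
}

func main() {
	taken := takenSubnets()
	// Candidate third octets step by 9, matching the 49 -> 58 -> 67 -> 76 progression in the log.
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		fmt.Println("using free private subnet", subnet)
		// Equivalent of the "docker network create --driver=bridge --subnet=... --gateway=..." call above.
		out, err := exec.Command("docker", "network", "create", "--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway, "example-net").CombinedOutput()
		if err != nil {
			fmt.Println("create failed:", err, string(out))
		}
		return
	}
}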
	I0520 11:20:40.615798  309055 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-601362" container
	I0520 11:20:40.615914  309055 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0520 11:20:40.629732  309055 cli_runner.go:164] Run: docker volume create embed-certs-601362 --label name.minikube.sigs.k8s.io=embed-certs-601362 --label created_by.minikube.sigs.k8s.io=true
	I0520 11:20:40.651315  309055 oci.go:103] Successfully created a docker volume embed-certs-601362
	I0520 11:20:40.651402  309055 cli_runner.go:164] Run: docker run --rm --name embed-certs-601362-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-601362 --entrypoint /usr/bin/test -v embed-certs-601362:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0520 11:20:41.317597  309055 oci.go:107] Successfully prepared a docker volume embed-certs-601362
	I0520 11:20:41.317654  309055 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 11:20:41.317674  309055 kic.go:194] Starting extracting preloaded images to volume ...
	I0520 11:20:41.317763  309055 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-601362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0520 11:20:45.384241  309055 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-601362:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.066440085s)
	I0520 11:20:45.384272  309055 kic.go:203] duration metric: took 4.066594875s to extract preloaded images to volume ...
	W0520 11:20:45.384418  309055 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0520 11:20:45.384549  309055 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0520 11:20:45.456712  309055 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-601362 --name embed-certs-601362 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-601362 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-601362 --network embed-certs-601362 --ip 192.168.76.2 --volume embed-certs-601362:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0520 11:20:45.817540  309055 cli_runner.go:164] Run: docker container inspect embed-certs-601362 --format={{.State.Running}}
	I0520 11:20:45.837536  309055 cli_runner.go:164] Run: docker container inspect embed-certs-601362 --format={{.State.Status}}
	I0520 11:20:45.869256  309055 cli_runner.go:164] Run: docker exec embed-certs-601362 stat /var/lib/dpkg/alternatives/iptables
	I0520 11:20:45.933633  309055 oci.go:144] the created container "embed-certs-601362" has a running status.
	I0520 11:20:45.933665  309055 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa...
	I0520 11:20:46.804897  309055 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0520 11:20:46.839841  309055 cli_runner.go:164] Run: docker container inspect embed-certs-601362 --format={{.State.Status}}
	I0520 11:20:46.862971  309055 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0520 11:20:46.862993  309055 kic_runner.go:114] Args: [docker exec --privileged embed-certs-601362 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0520 11:20:46.956880  309055 cli_runner.go:164] Run: docker container inspect embed-certs-601362 --format={{.State.Status}}
	I0520 11:20:46.975131  309055 machine.go:94] provisionDockerMachine start ...
	I0520 11:20:46.975312  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:46.997158  309055 main.go:141] libmachine: Using SSH client type: native
	I0520 11:20:46.997559  309055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0520 11:20:46.997575  309055 main.go:141] libmachine: About to run SSH command:
	hostname
	I0520 11:20:47.140819  309055 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-601362
	
	I0520 11:20:47.140841  309055 ubuntu.go:169] provisioning hostname "embed-certs-601362"
	I0520 11:20:47.140905  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:47.160198  309055 main.go:141] libmachine: Using SSH client type: native
	I0520 11:20:47.160443  309055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0520 11:20:47.160456  309055 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-601362 && echo "embed-certs-601362" | sudo tee /etc/hostname
	I0520 11:20:47.311658  309055 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-601362
	
	I0520 11:20:47.311736  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:47.328611  309055 main.go:141] libmachine: Using SSH client type: native
	I0520 11:20:47.328859  309055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0520 11:20:47.328877  309055 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-601362' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-601362/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-601362' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0520 11:20:47.457785  309055 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0520 11:20:47.457812  309055 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18925-2151/.minikube CaCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18925-2151/.minikube}
	I0520 11:20:47.457834  309055 ubuntu.go:177] setting up certificates
	I0520 11:20:47.457848  309055 provision.go:84] configureAuth start
	I0520 11:20:47.457931  309055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-601362
	I0520 11:20:47.476162  309055 provision.go:143] copyHostCerts
	I0520 11:20:47.476231  309055 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem, removing ...
	I0520 11:20:47.476245  309055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem
	I0520 11:20:47.476331  309055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/ca.pem (1078 bytes)
	I0520 11:20:47.476428  309055 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem, removing ...
	I0520 11:20:47.476439  309055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem
	I0520 11:20:47.476467  309055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/cert.pem (1123 bytes)
	I0520 11:20:47.476522  309055 exec_runner.go:144] found /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem, removing ...
	I0520 11:20:47.476533  309055 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem
	I0520 11:20:47.476574  309055 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18925-2151/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18925-2151/.minikube/key.pem (1675 bytes)
	I0520 11:20:47.476666  309055 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca-key.pem org=jenkins.embed-certs-601362 san=[127.0.0.1 192.168.76.2 embed-certs-601362 localhost minikube]
	I0520 11:20:47.821154  309055 provision.go:177] copyRemoteCerts
	I0520 11:20:47.821222  309055 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0520 11:20:47.821266  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:47.837649  309055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa Username:docker}
	I0520 11:20:47.930328  309055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0520 11:20:47.955011  309055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0520 11:20:47.981548  309055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0520 11:20:48.007229  309055 provision.go:87] duration metric: took 549.363417ms to configureAuth
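configureAuth above generates a CA-signed server certificate with the SANs listed in the provision.go line (127.0.0.1, 192.168.76.2, embed-certs-601362, localhost, minikube) and copies the resulting PEM files to /etc/docker on the node. As a self-contained illustration of what that certificate generation amounts to, assuming a throwaway CA instead of minikube's own (file names and organization strings here are placeholders), a Go sketch using crypto/x509:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	// Stand-in CA (minikube would instead load .minikube/certs/ca.pem and ca-key.pem).
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}}, // placeholder
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // mirrors CertExpiration:26280h0m0s above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the SANs from the provision.go line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-601362"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-601362", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)

	// Write PEM files; minikube then copies such files to /etc/docker/server.pem and server-key.pem.
	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644))
	keyDER := x509.MarshalPKCS1PrivateKey(srvKey)
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: keyDER}), 0o600))
}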
	I0520 11:20:48.007255  309055 ubuntu.go:193] setting minikube options for container-runtime
	I0520 11:20:48.007451  309055 config.go:182] Loaded profile config "embed-certs-601362": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 11:20:48.007532  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:48.034961  309055 main.go:141] libmachine: Using SSH client type: native
	I0520 11:20:48.035240  309055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0520 11:20:48.035250  309055 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0520 11:20:48.166456  309055 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0520 11:20:48.166488  309055 ubuntu.go:71] root file system type: overlay
	I0520 11:20:48.166610  309055 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0520 11:20:48.166680  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:48.183723  309055 main.go:141] libmachine: Using SSH client type: native
	I0520 11:20:48.183957  309055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0520 11:20:48.184037  309055 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0520 11:20:48.320500  309055 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0520 11:20:48.320586  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:48.338865  309055 main.go:141] libmachine: Using SSH client type: native
	I0520 11:20:48.339117  309055 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2b30] 0x3e5390 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0520 11:20:48.339140  309055 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0520 11:20:49.100567  309055 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-20 11:20:48.316831329 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
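	The diff just applied shows the override technique the unit's own comments describe: the empty ExecStart= clears the value inherited from the stock unit so that only the following ExecStart= remains. A minimal sketch of the same pattern for a hypothetical service (unit name and command are illustrative, not from this run):
	
	  # /etc/systemd/system/myapp.service.d/override.conf
	  [Service]
	  ExecStart=
	  ExecStart=/usr/local/bin/myapp --listen 127.0.0.1:8080
	
	  # apply with: sudo systemctl daemon-reload && sudo systemctl restart myapp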
	
	I0520 11:20:49.100598  309055 machine.go:97] duration metric: took 2.125448335s to provisionDockerMachine
	I0520 11:20:49.100611  309055 client.go:171] duration metric: took 8.656895785s to LocalClient.Create
	I0520 11:20:49.100624  309055 start.go:167] duration metric: took 8.656944434s to libmachine.API.Create "embed-certs-601362"
	I0520 11:20:49.100632  309055 start.go:293] postStartSetup for "embed-certs-601362" (driver="docker")
	I0520 11:20:49.100642  309055 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0520 11:20:49.100708  309055 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0520 11:20:49.100757  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:49.117475  309055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa Username:docker}
	I0520 11:20:49.213895  309055 ssh_runner.go:195] Run: cat /etc/os-release
	I0520 11:20:49.217002  309055 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0520 11:20:49.217075  309055 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0520 11:20:49.217095  309055 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0520 11:20:49.217105  309055 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0520 11:20:49.217115  309055 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-2151/.minikube/addons for local assets ...
	I0520 11:20:49.217182  309055 filesync.go:126] Scanning /home/jenkins/minikube-integration/18925-2151/.minikube/files for local assets ...
	I0520 11:20:49.217276  309055 filesync.go:149] local asset: /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/ssl/certs/75122.pem -> 75122.pem in /etc/ssl/certs
	I0520 11:20:49.217383  309055 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0520 11:20:49.225611  309055 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/ssl/certs/75122.pem --> /etc/ssl/certs/75122.pem (1708 bytes)
	I0520 11:20:49.258734  309055 start.go:296] duration metric: took 158.088628ms for postStartSetup
	I0520 11:20:49.259135  309055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-601362
	I0520 11:20:49.276711  309055 profile.go:143] Saving config to /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/embed-certs-601362/config.json ...
	I0520 11:20:49.276981  309055 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 11:20:49.277026  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:49.292638  309055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa Username:docker}
	I0520 11:20:49.381678  309055 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0520 11:20:49.385884  309055 start.go:128] duration metric: took 8.945290949s to createHost
	I0520 11:20:49.385918  309055 start.go:83] releasing machines lock for "embed-certs-601362", held for 8.945432365s
	I0520 11:20:49.386017  309055 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-601362
	I0520 11:20:49.401318  309055 ssh_runner.go:195] Run: cat /version.json
	I0520 11:20:49.401374  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:49.401606  309055 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0520 11:20:49.401652  309055 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-601362
	I0520 11:20:49.421628  309055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa Username:docker}
	I0520 11:20:49.431817  309055 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/embed-certs-601362/id_rsa Username:docker}
	I0520 11:20:49.516573  309055 ssh_runner.go:195] Run: systemctl --version
	I0520 11:20:49.636672  309055 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0520 11:20:49.641217  309055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0520 11:20:49.667269  309055 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0520 11:20:49.667343  309055 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0520 11:20:49.701658  309055 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
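	The find/sed pass above only injects a "name" field and bumps cniVersion, so the patched loopback config keeps its original shape; a sketch of the expected result (values assumed, the file is not read back anywhere in this log):
	
	  {
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	  }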
	I0520 11:20:49.701683  309055 start.go:494] detecting cgroup driver to use...
	I0520 11:20:49.701713  309055 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 11:20:49.701809  309055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:20:49.719763  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0520 11:20:49.729951  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0520 11:20:49.740021  309055 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0520 11:20:49.740104  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0520 11:20:49.750071  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 11:20:49.759858  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0520 11:20:49.770082  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0520 11:20:49.785768  309055 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0520 11:20:49.796375  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0520 11:20:49.808022  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0520 11:20:49.819510  309055 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0520 11:20:49.829298  309055 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0520 11:20:49.837695  309055 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0520 11:20:49.845644  309055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:20:49.930727  309055 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0520 11:20:50.048408  309055 start.go:494] detecting cgroup driver to use...
	I0520 11:20:50.048509  309055 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0520 11:20:50.048592  309055 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0520 11:20:50.069012  309055 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0520 11:20:50.069157  309055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0520 11:20:50.081717  309055 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0520 11:20:50.099160  309055 ssh_runner.go:195] Run: which cri-dockerd
	I0520 11:20:50.105761  309055 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0520 11:20:50.116603  309055 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0520 11:20:50.143649  309055 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0520 11:20:51.000820  294755 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0520 11:20:51.013646  294755 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0520 11:20:51.016178  294755 out.go:177] 
	W0520 11:20:51.017929  294755 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0520 11:20:51.017985  294755 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0520 11:20:51.018012  294755 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0520 11:20:51.018017  294755 out.go:239] * 
	W0520 11:20:51.018917  294755 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0520 11:20:51.021916  294755 out.go:177] 
	I0520 11:20:50.249174  309055 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0520 11:20:50.360456  309055 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0520 11:20:50.360579  309055 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0520 11:20:50.384808  309055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:20:50.477756  309055 ssh_runner.go:195] Run: sudo systemctl restart docker
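	The 130-byte daemon.json copied a few lines up is what pins Docker to the "cgroupfs" driver detected earlier; its contents are not echoed in this log, but a plausible minimal sketch consistent with that setting would be:
	
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	  }
	
	which is why the daemon-reload and restart docker immediately above are needed.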
	I0520 11:20:50.755472  309055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0520 11:20:50.768071  309055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 11:20:50.780311  309055 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0520 11:20:50.872962  309055 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0520 11:20:50.959836  309055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:20:51.103773  309055 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0520 11:20:51.123102  309055 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0520 11:20:51.136492  309055 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0520 11:20:51.306137  309055 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0520 11:20:51.455971  309055 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0520 11:20:51.456047  309055 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0520 11:20:51.463125  309055 start.go:562] Will wait 60s for crictl version
	I0520 11:20:51.463193  309055 ssh_runner.go:195] Run: which crictl
	I0520 11:20:51.473880  309055 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0520 11:20:51.516688  309055 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0520 11:20:51.516760  309055 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0520 11:20:51.545343  309055 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	
	
	==> Docker <==
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:29 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:39 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:39 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:39 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:40 old-k8s-version-879853 dockerd[991]: 2024/05/20 11:20:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 20 11:20:49 old-k8s-version-879853 dockerd[991]: time="2024-05-20T11:20:49.260886084Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" spanID=bf187498522a74c4 traceID=abd9e4211329dbe1d79fb0f45903836b
	May 20 11:20:49 old-k8s-version-879853 dockerd[991]: time="2024-05-20T11:20:49.261087078Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" spanID=bf187498522a74c4 traceID=abd9e4211329dbe1d79fb0f45903836b
	May 20 11:20:49 old-k8s-version-879853 dockerd[991]: time="2024-05-20T11:20:49.263771988Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" spanID=bf187498522a74c4 traceID=abd9e4211329dbe1d79fb0f45903836b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b538e03219d7       ba04bb24b9575                                                                                         5 minutes ago       Running             storage-provisioner       2                   661d10d638eed       storage-provisioner
	1963b29e2abd2       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93        5 minutes ago       Running             kubernetes-dashboard      0                   b978f147b3700       kubernetes-dashboard-cd95d586-mkd7n
	005cb51798905       db91994f4ee8f                                                                                         5 minutes ago       Running             coredns                   1                   575ec894219cb       coredns-74ff55c5b-6l85w
	d282e88b488b5       25a5233254979                                                                                         5 minutes ago       Running             kube-proxy                1                   83cc61951d8e0       kube-proxy-2q9x5
	7d536cdf43d4f       ba04bb24b9575                                                                                         5 minutes ago       Exited              storage-provisioner       1                   661d10d638eed       storage-provisioner
	3109b2c5ae746       1611cd07b61d5                                                                                         5 minutes ago       Running             busybox                   1                   a9fd113bd2fe2       busybox
	e881ad64ded62       e7605f88f17d6                                                                                         6 minutes ago       Running             kube-scheduler            1                   292d9ea71ee49       kube-scheduler-old-k8s-version-879853
	40bd562563ebf       2c08bbbc02d3a                                                                                         6 minutes ago       Running             kube-apiserver            1                   3dff459b24b70       kube-apiserver-old-k8s-version-879853
	c8021e0d3603d       1df8a2b116bd1                                                                                         6 minutes ago       Running             kube-controller-manager   1                   8648a39426383       kube-controller-manager-old-k8s-version-879853
	730cfcbb3b453       05b738aa1bc63                                                                                         6 minutes ago       Running             etcd                      1                   c7adfce7dfc2f       etcd-old-k8s-version-879853
	86980f96cf04f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   6 minutes ago       Exited              busybox                   0                   bd3efe6fff048       busybox
	b1eddd083512b       25a5233254979                                                                                         8 minutes ago       Exited              kube-proxy                0                   2284ca278a1e6       kube-proxy-2q9x5
	ae2339caab7b8       db91994f4ee8f                                                                                         8 minutes ago       Exited              coredns                   0                   685014cbcbdf7       coredns-74ff55c5b-6l85w
	d7bf0c086e3a8       e7605f88f17d6                                                                                         8 minutes ago       Exited              kube-scheduler            0                   9d1d813c83d5f       kube-scheduler-old-k8s-version-879853
	55da4e11e7737       05b738aa1bc63                                                                                         8 minutes ago       Exited              etcd                      0                   34b36be3cd554       etcd-old-k8s-version-879853
	33738d35cca5e       1df8a2b116bd1                                                                                         8 minutes ago       Exited              kube-controller-manager   0                   93f35d35bce0c       kube-controller-manager-old-k8s-version-879853
	ac483fa044f27       2c08bbbc02d3a                                                                                         8 minutes ago       Exited              kube-apiserver            0                   c38da4f33c260       kube-apiserver-old-k8s-version-879853
	
	
	==> coredns [005cb5179890] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:59153 - 37346 "HINFO IN 3509878255759386750.1471500059673684815. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019744504s
	
	
	==> coredns [ae2339caab7b] <==
	I0520 11:13:21.719705       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-20 11:12:51.719063768 +0000 UTC m=+0.020439956) (total time: 30.000530645s):
	Trace[2019727887]: [30.000530645s] [30.000530645s] END
	E0520 11:13:21.719737       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0520 11:13:21.721835       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-20 11:12:51.721464965 +0000 UTC m=+0.022841161) (total time: 30.000347335s):
	Trace[939984059]: [30.000347335s] [30.000347335s] END
	E0520 11:13:21.721853       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0520 11:13:21.722121       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-05-20 11:12:51.721814545 +0000 UTC m=+0.023190733) (total time: 30.000292869s):
	Trace[1474941318]: [30.000292869s] [30.000292869s] END
	E0520 11:13:21.722134       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53542 - 38693 "HINFO IN 655767211909321781.6161317165832143289. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011629173s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-879853
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-879853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1b01766f8cca110acded3a48649b81463b982c91
	                    minikube.k8s.io/name=old-k8s-version-879853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_20T11_12_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 20 May 2024 11:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-879853
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 20 May 2024 11:20:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 20 May 2024 11:15:53 +0000   Mon, 20 May 2024 11:12:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 20 May 2024 11:15:53 +0000   Mon, 20 May 2024 11:12:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 20 May 2024 11:15:53 +0000   Mon, 20 May 2024 11:12:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 20 May 2024 11:15:53 +0000   Mon, 20 May 2024 11:12:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-879853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022428Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022428Ki
	  pods:               110
	System Info:
	  Machine ID:                 3c16ac2350684e0195b62451ab80a213
	  System UUID:                9c2e95d2-65a3-4d25-ba19-11ba3032fb11
	  Boot ID:                    360c613b-7d2d-4efb-a784-5066f036d5dd
	  Kernel Version:             5.15.0-1061-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-6l85w                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m2s
	  kube-system                 etcd-old-k8s-version-879853                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m14s
	  kube-system                 kube-apiserver-old-k8s-version-879853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-old-k8s-version-879853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-2q9x5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m2s
	  kube-system                 kube-scheduler-old-k8s-version-879853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 metrics-server-9975d5f86-24f8f                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-cznvg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-mkd7n               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m30s (x5 over 8m30s)  kubelet     Node old-k8s-version-879853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x4 over 8m30s)  kubelet     Node old-k8s-version-879853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x4 over 8m30s)  kubelet     Node old-k8s-version-879853 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m14s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m14s                  kubelet     Node old-k8s-version-879853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m14s                  kubelet     Node old-k8s-version-879853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m14s                  kubelet     Node old-k8s-version-879853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m4s                   kubelet     Node old-k8s-version-879853 status is now: NodeReady
	  Normal  Starting                 8m                     kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m1s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-879853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s (x8 over 6m1s)    kubelet     Node old-k8s-version-879853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s (x7 over 6m1s)    kubelet     Node old-k8s-version-879853 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m48s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000755] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000937] FS-Cache: N-cookie d=000000006dbb735b{9p.inode} n=00000000dae70b24
	[  +0.001054] FS-Cache: N-key=[8] '8d385c0100000000'
	[  +0.003021] FS-Cache: Duplicate cookie detected
	[  +0.000676] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001000] FS-Cache: O-cookie d=000000006dbb735b{9p.inode} n=00000000123da853
	[  +0.001116] FS-Cache: O-key=[8] '8d385c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=000000006dbb735b{9p.inode} n=00000000de49adbd
	[  +0.001032] FS-Cache: N-key=[8] '8d385c0100000000'
	[  +2.409710] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000964] FS-Cache: O-cookie d=000000006dbb735b{9p.inode} n=0000000067772ff7
	[  +0.001052] FS-Cache: O-key=[8] '8c385c0100000000'
	[  +0.000759] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000941] FS-Cache: N-cookie d=000000006dbb735b{9p.inode} n=000000001e2817e2
	[  +0.001113] FS-Cache: N-key=[8] '8c385c0100000000'
	[  +0.250072] FS-Cache: Duplicate cookie detected
	[  +0.000744] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001009] FS-Cache: O-cookie d=000000006dbb735b{9p.inode} n=000000004e27c347
	[  +0.001053] FS-Cache: O-key=[8] '95385c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.001009] FS-Cache: N-cookie d=000000006dbb735b{9p.inode} n=00000000dae70b24
	[  +0.001043] FS-Cache: N-key=[8] '95385c0100000000'
	[May20 11:04] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [55da4e11e773] <==
	raft2024/05/20 11:12:23 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/05/20 11:12:23 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-05-20 11:12:23.523592 I | etcdserver: published {Name:old-k8s-version-879853 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-05-20 11:12:23.523857 I | embed: ready to serve client requests
	2024-05-20 11:12:23.525886 I | embed: serving client requests on 127.0.0.1:2379
	2024-05-20 11:12:23.526102 I | etcdserver: setting up the initial cluster version to 3.4
	2024-05-20 11:12:23.526497 I | embed: ready to serve client requests
	2024-05-20 11:12:23.531390 I | embed: serving client requests on 192.168.85.2:2379
	2024-05-20 11:12:23.549149 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-05-20 11:12:23.550174 I | etcdserver/api: enabled capabilities for version 3.4
	2024-05-20 11:12:39.700542 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:12:41.716510 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:12:51.719872 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:13:01.716535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:13:11.716506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:13:21.716493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:13:31.716452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:13:41.716490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:13:51.716528 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:14:01.716613 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:14:11.725698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:14:21.716461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:14:27.191166 N | pkg/osutil: received terminated signal, shutting down...
	WARNING: 2024/05/20 11:14:27 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	2024-05-20 11:14:27.241313 I | etcdserver: skipped leadership transfer for single voting member cluster
	
	
	==> etcd [730cfcbb3b45] <==
	2024-05-20 11:16:42.888062 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:16:52.888178 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:17:02.887940 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:17:12.888122 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:17:22.887952 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:17:32.888047 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:17:42.887914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:17:52.887926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:18:02.888080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:18:12.887970 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:18:22.887976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:18:32.887985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:18:42.888080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:18:52.888110 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:19:02.887915 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:19:12.887843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:19:22.887955 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:19:32.888125 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:19:42.888020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:19:52.888107 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:20:02.888018 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:20:12.887975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:20:22.887956 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:20:32.888112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-05-20 11:20:42.888120 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:20:52 up  1:02,  0 users,  load average: 1.75, 2.19, 2.71
	Linux old-k8s-version-879853 5.15.0-1061-aws #67~20.04.1-Ubuntu SMP Wed Apr 17 15:09:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [40bd562563eb] <==
	I0520 11:17:28.990192       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:17:28.990201       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0520 11:18:01.492930       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:18:01.492977       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:18:01.492986       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0520 11:18:05.273775       1 handler_proxy.go:102] no RequestInfo found in the context
	E0520 11:18:05.273858       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:18:05.273867       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0520 11:18:42.645188       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:18:42.645232       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:18:42.645240       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0520 11:19:27.622769       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:19:27.622816       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:19:27.622826       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0520 11:19:58.182162       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:19:58.182224       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:19:58.182233       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0520 11:20:03.010307       1 handler_proxy.go:102] no RequestInfo found in the context
	E0520 11:20:03.010562       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0520 11:20:03.010580       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0520 11:20:39.143116       1 client.go:360] parsed scheme: "passthrough"
	I0520 11:20:39.143459       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0520 11:20:39.143616       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
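	The repeated 503 for v1beta1.metrics.k8s.io above indicates the aggregated metrics APIService has no healthy backend. A hypothetical follow-up (not run as part of this test) would be to check the APIService status and the metrics-server pod already listed in the node description above:
	
	  kubectl get apiservice v1beta1.metrics.k8s.io
	  kubectl -n kube-system describe pod metrics-server-9975d5f86-24f8f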
	
	
	==> kube-apiserver [ac483fa044f2] <==
	W0520 11:14:27.230875       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.230918       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.230957       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.230991       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231039       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231075       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231108       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231142       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231191       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231261       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231300       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231361       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231404       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231450       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.231490       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232254       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232305       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232353       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232406       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232478       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232525       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232583       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232635       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232707       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0520 11:14:27.232760       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	
	==> kube-controller-manager [33738d35cca5] <==
	I0520 11:12:50.541689       1 disruption.go:339] Sending events to api server.
	I0520 11:12:50.553562       1 shared_informer.go:247] Caches are synced for resource quota 
	I0520 11:12:50.565303       1 shared_informer.go:247] Caches are synced for resource quota 
	I0520 11:12:50.727230       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0520 11:12:50.927453       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0520 11:12:51.013445       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0520 11:12:51.013505       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0520 11:12:52.430393       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0520 11:12:52.439699       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-zgmvp"
	I0520 11:14:25.868431       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0520 11:14:26.136345       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0520 11:14:26.161595       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0520 11:14:27.085323       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-24f8f"
	W0520 11:14:27.140776       1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: failed to update metrics-server-svcvc EndpointSlice for Service kube-system/metrics-server: Put "https://192.168.85.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-svcvc": unexpected EOF
	E0520 11:14:27.141148       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server-9975d5f86.17d12e2472cf155d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-9975d5f86", UID:"3539e37f-b0e4-4041-bac3-1997139a51f6", APIVersion:"apps/v1", ResourceVersion:"558", FieldPath:""}, Reason:"SuccessfulCreate", Message:"Created pod: metrics-server-9975d5f86-24f8f", Source:v1.EventSource{Component:"replicaset-controller", Host:
""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc18aea04c50d175d, ext:123840308426, loc:(*time.Location)(0x632eb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc18aea04c50d175d, ext:123840308426, loc:(*time.Location)(0x632eb80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.85.2:8443/api/v1/namespaces/kube-system/events": unexpected EOF'(may retry after sleeping)
	I0520 11:14:27.141491       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-svcvc EndpointSlice for Service kube-system/metrics-server: Put \"https://192.168.85.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-svcvc\": unexpected EOF"
	E0520 11:14:27.142142       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server.17d12e247625d139", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"metrics-server", UID:"d8b0438a-3bfc-4c1a-b845-beb3404b7835", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-svcvc EndpointSlice f
or Service kube-system/metrics-server: Put \"https://192.168.85.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-svcvc\": unexpected EOF", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc18aea04c863d339, ext:123896324270, loc:(*time.Location)(0x632eb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc18aea04c863d339, ext:123896324270, loc:(*time.Location)(0x632eb80)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.85.2:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.85.2:8443: connect: connection refused'(may retry after sleeping)
	E0520 11:14:27.142494       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	E0520 11:14:27.155237       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	E0520 11:14:27.155953       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	E0520 11:14:27.159174       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	E0520 11:14:27.170229       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	E0520 11:14:27.259208       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with Get "https://192.168.85.2:8443/apis/apps/v1/namespaces/kube-system/replicasets/metrics-server-9975d5f86": dial tcp 192.168.85.2:8443: connect: connection refused
	W0520 11:14:27.286436       1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: failed to update metrics-server-svcvc EndpointSlice for Service kube-system/metrics-server: Put "https://192.168.85.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-svcvc": dial tcp 192.168.85.2:8443: connect: connection refused
	I0520 11:14:27.286526       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-svcvc EndpointSlice for Service kube-system/metrics-server: Put \"https://192.168.85.2:8443/apis/discovery.k8s.io/v1beta1/namespaces/kube-system/endpointslices/metrics-server-svcvc\": dial tcp 192.168.85.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [c8021e0d3603] <==
	W0520 11:16:27.986769       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:16:52.278452       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:16:59.637355       1 request.go:655] Throttling request took 1.0484012s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0520 11:17:00.489986       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:17:22.780454       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:17:32.140708       1 request.go:655] Throttling request took 1.048214645s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0520 11:17:32.992172       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:17:53.325929       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:18:04.643669       1 request.go:655] Throttling request took 1.048489133s, request: GET:https://192.168.85.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0520 11:18:05.495676       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:18:23.827770       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:18:37.146086       1 request.go:655] Throttling request took 1.048284575s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0520 11:18:37.997375       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:18:54.329667       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:19:09.647978       1 request.go:655] Throttling request took 1.048511282s, request: GET:https://192.168.85.2:8443/apis/policy/v1beta1?timeout=32s
	W0520 11:19:10.499226       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:19:24.831480       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:19:42.149570       1 request.go:655] Throttling request took 1.048282123s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0520 11:19:43.001044       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:19:55.333377       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:20:14.651439       1 request.go:655] Throttling request took 1.048442976s, request: GET:https://192.168.85.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0520 11:20:15.502809       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0520 11:20:25.835129       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0520 11:20:47.153313       1 request.go:655] Throttling request took 1.048090296s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0520 11:20:48.005042       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [b1eddd083512] <==
	I0520 11:12:52.100942       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0520 11:12:52.101042       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0520 11:12:52.130479       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0520 11:12:52.130569       1 server_others.go:185] Using iptables Proxier.
	I0520 11:12:52.130785       1 server.go:650] Version: v1.20.0
	I0520 11:12:52.131283       1 config.go:315] Starting service config controller
	I0520 11:12:52.131292       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0520 11:12:52.133186       1 config.go:224] Starting endpoint slice config controller
	I0520 11:12:52.133197       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0520 11:12:52.231783       1 shared_informer.go:247] Caches are synced for service config 
	I0520 11:12:52.233366       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [d282e88b488b] <==
	I0520 11:15:04.896967       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0520 11:15:04.897088       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0520 11:15:04.920418       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0520 11:15:04.920513       1 server_others.go:185] Using iptables Proxier.
	I0520 11:15:04.920742       1 server.go:650] Version: v1.20.0
	I0520 11:15:04.921594       1 config.go:315] Starting service config controller
	I0520 11:15:04.921611       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0520 11:15:04.921629       1 config.go:224] Starting endpoint slice config controller
	I0520 11:15:04.921632       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0520 11:15:05.021782       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0520 11:15:05.021882       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [d7bf0c086e3a] <==
	W0520 11:12:31.769085       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:12:31.769133       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:12:31.769166       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:12:31.899408       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:12:31.899444       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:12:31.900399       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0520 11:12:31.900661       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0520 11:12:31.914052       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:12:31.914149       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0520 11:12:31.914227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0520 11:12:31.914291       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0520 11:12:31.914481       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0520 11:12:31.917267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0520 11:12:31.917277       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:12:31.917354       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:12:31.917456       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0520 11:12:31.917528       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0520 11:12:31.917587       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0520 11:12:31.921376       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0520 11:12:32.729541       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0520 11:12:32.838669       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0520 11:12:32.846000       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0520 11:12:32.915942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0520 11:12:33.501160       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0520 11:14:27.167523       1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server-9975d5f86-24f8f.17d12e2475c1226d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"metrics-server-9975d5f86-24f8f", UID:"04742115-71c0-4684-a740-4ad06dbbab20", APIVersion:"v1", ResourceVersion:"581", FieldPath:""}, Reason:"Scheduled", Message:"Successfully assigned kube-system/metrics-server-9975d5f86-24f8f to old-k8s-version-879853", Source:v1.EventSource{
Component:"default-scheduler", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc18aea04c7ff246d, ext:123790874037, loc:(*time.Location)(0x25fc580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc18aea04c7ff246d, ext:123790874037, loc:(*time.Location)(0x25fc580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.85.2:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.85.2:8443: connect: connection refused'(may retry after sleeping)
	
	
	==> kube-scheduler [e881ad64ded6] <==
	I0520 11:14:56.255795       1 serving.go:331] Generated self-signed cert in-memory
	W0520 11:15:01.838998       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0520 11:15:01.839794       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0520 11:15:01.839843       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0520 11:15:01.839849       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0520 11:15:02.071621       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:15:02.071659       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0520 11:15:02.082492       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0520 11:15:02.082579       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0520 11:15:02.178645       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	May 20 11:18:34 old-k8s-version-879853 kubelet[1232]: E0520 11:18:34.227935    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:18:40 old-k8s-version-879853 kubelet[1232]: E0520 11:18:40.228209    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:18:49 old-k8s-version-879853 kubelet[1232]: E0520 11:18:49.235833    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:18:51 old-k8s-version-879853 kubelet[1232]: E0520 11:18:51.228282    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:19:03 old-k8s-version-879853 kubelet[1232]: E0520 11:19:03.230228    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:19:06 old-k8s-version-879853 kubelet[1232]: E0520 11:19:06.228367    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:19:15 old-k8s-version-879853 kubelet[1232]: E0520 11:19:15.232306    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:19:19 old-k8s-version-879853 kubelet[1232]: E0520 11:19:19.228241    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:19:26 old-k8s-version-879853 kubelet[1232]: E0520 11:19:26.228031    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:19:30 old-k8s-version-879853 kubelet[1232]: E0520 11:19:30.228171    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:19:40 old-k8s-version-879853 kubelet[1232]: E0520 11:19:40.228690    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:19:41 old-k8s-version-879853 kubelet[1232]: E0520 11:19:41.228364    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:19:53 old-k8s-version-879853 kubelet[1232]: E0520 11:19:53.236437    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:19:56 old-k8s-version-879853 kubelet[1232]: E0520 11:19:56.228010    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.228563    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:20:08 old-k8s-version-879853 kubelet[1232]: E0520 11:20:08.229095    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:20:21 old-k8s-version-879853 kubelet[1232]: E0520 11:20:21.228588    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:20:23 old-k8s-version-879853 kubelet[1232]: E0520 11:20:23.230344    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:20:32 old-k8s-version-879853 kubelet[1232]: E0520 11:20:32.228101    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:20:38 old-k8s-version-879853 kubelet[1232]: E0520 11:20:38.228264    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	May 20 11:20:43 old-k8s-version-879853 kubelet[1232]: E0520 11:20:43.227932    1232 pod_workers.go:191] Error syncing pod 04e06e00-c9c2-4076-8e12-8abaee177786 ("dashboard-metrics-scraper-8d5bb5db8-cznvg_kubernetes-dashboard(04e06e00-c9c2-4076-8e12-8abaee177786)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
	May 20 11:20:49 old-k8s-version-879853 kubelet[1232]: E0520 11:20:49.264362    1232 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	May 20 11:20:49 old-k8s-version-879853 kubelet[1232]: E0520 11:20:49.264399    1232 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	May 20 11:20:49 old-k8s-version-879853 kubelet[1232]: E0520 11:20:49.264627    1232 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-4g8dg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exe
c:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-24f8f_kube-system(047421
15-71c0-4684-a740-4ad06dbbab20): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	May 20 11:20:49 old-k8s-version-879853 kubelet[1232]: E0520 11:20:49.264688    1232 pod_workers.go:191] Error syncing pod 04742115-71c0-4684-a740-4ad06dbbab20 ("metrics-server-9975d5f86-24f8f_kube-system(04742115-71c0-4684-a740-4ad06dbbab20)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	
	
	==> kubernetes-dashboard [1963b29e2abd] <==
	2024/05/20 11:15:27 Using namespace: kubernetes-dashboard
	2024/05/20 11:15:27 Using in-cluster config to connect to apiserver
	2024/05/20 11:15:27 Using secret token for csrf signing
	2024/05/20 11:15:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/05/20 11:15:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/05/20 11:15:27 Successful initial request to the apiserver, version: v1.20.0
	2024/05/20 11:15:27 Generating JWE encryption key
	2024/05/20 11:15:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/05/20 11:15:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/05/20 11:15:28 Initializing JWE encryption key from synchronized object
	2024/05/20 11:15:28 Creating in-cluster Sidecar client
	2024/05/20 11:15:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:15:28 Serving insecurely on HTTP port: 9090
	2024/05/20 11:15:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:16:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:16:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:17:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:17:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:18:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:18:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:19:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:19:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:20:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/05/20 11:15:27 Starting overwatch
	
	
	==> storage-provisioner [2b538e03219d] <==
	I0520 11:15:45.557633       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0520 11:15:45.610748       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0520 11:15:45.617295       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0520 11:16:03.119575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0520 11:16:03.119929       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-879853_d5228072-6e8a-4918-997b-565dfc9b87aa!
	I0520 11:16:03.120121       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99ea0f77-785b-44cb-b1c7-db2ab174a995", APIVersion:"v1", ResourceVersion:"804", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-879853_d5228072-6e8a-4918-997b-565dfc9b87aa became leader
	I0520 11:16:03.220997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-879853_d5228072-6e8a-4918-997b-565dfc9b87aa!
	
	
	==> storage-provisioner [7d536cdf43d4] <==
	I0520 11:15:04.582624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0520 11:15:34.584674       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879853 -n old-k8s-version-879853
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-879853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-24f8f dashboard-metrics-scraper-8d5bb5db8-cznvg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-879853 describe pod metrics-server-9975d5f86-24f8f dashboard-metrics-scraper-8d5bb5db8-cznvg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-879853 describe pod metrics-server-9975d5f86-24f8f dashboard-metrics-scraper-8d5bb5db8-cznvg: exit status 1 (101.597997ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-24f8f" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-cznvg" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-879853 describe pod metrics-server-9975d5f86-24f8f dashboard-metrics-scraper-8d5bb5db8-cznvg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.06s)


Test pass (316/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.79
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.1/json-events 6.42
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.19
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.52
22 TestOffline 99.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 149.21
29 TestAddons/parallel/Registry 17.65
31 TestAddons/parallel/InspektorGadget 10.79
32 TestAddons/parallel/MetricsServer 6.86
35 TestAddons/parallel/CSI 57.53
36 TestAddons/parallel/Headlamp 12.01
37 TestAddons/parallel/CloudSpanner 6.5
38 TestAddons/parallel/LocalPath 51.38
39 TestAddons/parallel/NvidiaDevicePlugin 5.41
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.16
44 TestAddons/StoppedEnableDisable 11.15
45 TestCertOptions 35.4
46 TestCertExpiration 247.01
47 TestDockerFlags 42.34
48 TestForceSystemdFlag 40.08
49 TestForceSystemdEnv 40.55
55 TestErrorSpam/setup 31.69
56 TestErrorSpam/start 0.73
57 TestErrorSpam/status 0.95
58 TestErrorSpam/pause 1.32
59 TestErrorSpam/unpause 1.42
60 TestErrorSpam/stop 2.19
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 84.69
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 29.3
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.11
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.69
72 TestFunctional/serial/CacheCmd/cache/add_local 1
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.13
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 40.67
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.21
83 TestFunctional/serial/LogsFileCmd 1.08
84 TestFunctional/serial/InvalidService 5.25
86 TestFunctional/parallel/ConfigCmd 0.37
87 TestFunctional/parallel/DashboardCmd 9.9
88 TestFunctional/parallel/DryRun 0.59
89 TestFunctional/parallel/InternationalLanguage 0.28
90 TestFunctional/parallel/StatusCmd 1.3
94 TestFunctional/parallel/ServiceCmdConnect 11.69
95 TestFunctional/parallel/AddonsCmd 0.17
96 TestFunctional/parallel/PersistentVolumeClaim 27.17
98 TestFunctional/parallel/SSHCmd 0.77
99 TestFunctional/parallel/CpCmd 1.99
101 TestFunctional/parallel/FileSync 0.29
102 TestFunctional/parallel/CertSync 2.11
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
110 TestFunctional/parallel/License 0.31
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
113 TestFunctional/parallel/Version/short 0.06
114 TestFunctional/parallel/Version/components 0.69
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.39
120 TestFunctional/parallel/ImageCommands/Setup 2.53
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.33
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.67
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.8
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.88
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
133 TestFunctional/parallel/DockerEnv/bash 1.27
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.1
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.4
140 TestFunctional/parallel/MountCmd/any-port 7.05
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.15
142 TestFunctional/parallel/MountCmd/specific-port 2.05
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.59
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
145 TestFunctional/parallel/ServiceCmd/List 0.58
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
148 TestFunctional/parallel/ProfileCmd/profile_list 0.43
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
150 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
151 TestFunctional/parallel/ServiceCmd/Format 0.52
152 TestFunctional/parallel/ServiceCmd/URL 0.51
153 TestFunctional/delete_addon-resizer_images 0.08
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 132.96
160 TestMultiControlPlane/serial/DeployApp 45.33
161 TestMultiControlPlane/serial/PingHostFromPods 1.61
162 TestMultiControlPlane/serial/AddWorkerNode 27.6
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
165 TestMultiControlPlane/serial/CopyFile 18.72
166 TestMultiControlPlane/serial/StopSecondaryNode 11.68
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.63
168 TestMultiControlPlane/serial/RestartSecondaryNode 58.36
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.85
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 254.56
171 TestMultiControlPlane/serial/DeleteSecondaryNode 12.39
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
173 TestMultiControlPlane/serial/StopCluster 32.78
174 TestMultiControlPlane/serial/RestartCluster 87.03
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
176 TestMultiControlPlane/serial/AddSecondaryNode 45.49
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
180 TestImageBuild/serial/Setup 33.07
181 TestImageBuild/serial/NormalBuild 1.83
182 TestImageBuild/serial/BuildWithBuildArg 0.87
183 TestImageBuild/serial/BuildWithDockerIgnore 0.7
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
188 TestJSONOutput/start/Command 83.4
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.58
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.5
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.9
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
213 TestKicCustomNetwork/create_custom_network 31.04
214 TestKicCustomNetwork/use_default_bridge_network 36.09
215 TestKicExistingNetwork 36.48
216 TestKicCustomSubnet 33.56
217 TestKicStaticIP 34.41
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 71.83
222 TestMountStart/serial/StartWithMountFirst 7.84
223 TestMountStart/serial/VerifyMountFirst 0.26
224 TestMountStart/serial/StartWithMountSecond 7.36
225 TestMountStart/serial/VerifyMountSecond 0.25
226 TestMountStart/serial/DeleteFirst 1.45
227 TestMountStart/serial/VerifyMountPostDelete 0.25
228 TestMountStart/serial/Stop 1.19
229 TestMountStart/serial/RestartStopped 8.25
230 TestMountStart/serial/VerifyMountPostStop 0.25
233 TestMultiNode/serial/FreshStart2Nodes 65.26
234 TestMultiNode/serial/DeployApp2Nodes 45.85
235 TestMultiNode/serial/PingHostFrom2Pods 0.99
236 TestMultiNode/serial/AddNode 17.87
237 TestMultiNode/serial/MultiNodeLabels 0.11
238 TestMultiNode/serial/ProfileList 0.4
239 TestMultiNode/serial/CopyFile 9.54
240 TestMultiNode/serial/StopNode 2.19
241 TestMultiNode/serial/StartAfterStop 10.96
242 TestMultiNode/serial/RestartKeepsNodes 116.65
243 TestMultiNode/serial/DeleteNode 5.81
244 TestMultiNode/serial/StopMultiNode 21.65
245 TestMultiNode/serial/RestartMultiNode 56.03
246 TestMultiNode/serial/ValidateNameConflict 38.37
251 TestPreload 104.36
253 TestScheduledStopUnix 106.23
254 TestSkaffold 117.11
256 TestInsufficientStorage 10.61
257 TestRunningBinaryUpgrade 80.59
259 TestKubernetesUpgrade 371.18
260 TestMissingContainerUpgrade 148.74
262 TestPause/serial/Start 94.99
263 TestPause/serial/SecondStartNoReconfiguration 34.63
264 TestPause/serial/Pause 0.9
265 TestPause/serial/VerifyStatus 0.49
266 TestPause/serial/Unpause 0.64
267 TestPause/serial/PauseAgain 1.09
268 TestPause/serial/DeletePaused 2.4
269 TestPause/serial/VerifyDeletedResources 0.16
270 TestStoppedBinaryUpgrade/Setup 1.07
271 TestStoppedBinaryUpgrade/Upgrade 79.12
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.43
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
282 TestNoKubernetes/serial/StartWithK8s 39.52
283 TestNoKubernetes/serial/StartWithStopK8s 17.05
295 TestNoKubernetes/serial/Start 10.34
296 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
297 TestNoKubernetes/serial/ProfileList 0.78
298 TestNoKubernetes/serial/Stop 1.49
299 TestNoKubernetes/serial/StartNoArgs 8.04
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
302 TestStartStop/group/old-k8s-version/serial/FirstStart 144.87
303 TestStartStop/group/old-k8s-version/serial/DeployApp 10.21
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 53.4
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.71
307 TestStartStop/group/old-k8s-version/serial/Stop 11.36
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
310 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.32
312 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.24
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 289.29
315 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
317 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
320 TestStartStop/group/embed-certs/serial/FirstStart 88.98
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.16
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
324 TestStartStop/group/old-k8s-version/serial/Pause 3.73
326 TestStartStop/group/no-preload/serial/FirstStart 58.82
327 TestStartStop/group/embed-certs/serial/DeployApp 9.5
328 TestStartStop/group/no-preload/serial/DeployApp 7.52
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.47
331 TestStartStop/group/embed-certs/serial/Stop 11.01
332 TestStartStop/group/no-preload/serial/Stop 11.35
333 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
334 TestStartStop/group/embed-certs/serial/SecondStart 269.74
335 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
336 TestStartStop/group/no-preload/serial/SecondStart 272.57
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
339 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
342 TestStartStop/group/embed-certs/serial/Pause 2.76
343 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
344 TestStartStop/group/no-preload/serial/Pause 3.54
346 TestStartStop/group/newest-cni/serial/FirstStart 56.93
347 TestNetworkPlugins/group/auto/Start 55.83
348 TestStartStop/group/newest-cni/serial/DeployApp 0
349 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.29
350 TestStartStop/group/newest-cni/serial/Stop 6.06
351 TestNetworkPlugins/group/auto/KubeletFlags 0.31
352 TestNetworkPlugins/group/auto/NetCatPod 11.38
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
354 TestStartStop/group/newest-cni/serial/SecondStart 19.19
355 TestNetworkPlugins/group/auto/DNS 0.31
356 TestNetworkPlugins/group/auto/Localhost 0.32
357 TestNetworkPlugins/group/auto/HairPin 0.35
358 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
359 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
360 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
361 TestStartStop/group/newest-cni/serial/Pause 3.79
362 TestNetworkPlugins/group/kindnet/Start 71.88
363 TestNetworkPlugins/group/calico/Start 83.8
364 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
365 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
366 TestNetworkPlugins/group/kindnet/NetCatPod 10.46
367 TestNetworkPlugins/group/kindnet/DNS 0.27
368 TestNetworkPlugins/group/kindnet/Localhost 0.18
369 TestNetworkPlugins/group/kindnet/HairPin 0.17
370 TestNetworkPlugins/group/calico/ControllerPod 6.01
371 TestNetworkPlugins/group/calico/KubeletFlags 0.38
372 TestNetworkPlugins/group/calico/NetCatPod 12.45
373 TestNetworkPlugins/group/custom-flannel/Start 69.49
374 TestNetworkPlugins/group/calico/DNS 0.24
375 TestNetworkPlugins/group/calico/Localhost 0.23
376 TestNetworkPlugins/group/calico/HairPin 0.24
377 TestNetworkPlugins/group/false/Start 52.48
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.33
380 TestNetworkPlugins/group/custom-flannel/DNS 0.21
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
383 TestNetworkPlugins/group/false/KubeletFlags 0.39
384 TestNetworkPlugins/group/false/NetCatPod 12.51
385 TestNetworkPlugins/group/false/DNS 0.26
386 TestNetworkPlugins/group/false/Localhost 0.24
387 TestNetworkPlugins/group/false/HairPin 0.23
388 TestNetworkPlugins/group/enable-default-cni/Start 56.68
389 TestNetworkPlugins/group/flannel/Start 66.52
390 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
391 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.32
392 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
393 TestNetworkPlugins/group/enable-default-cni/Localhost 0.22
394 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
395 TestNetworkPlugins/group/flannel/ControllerPod 6.01
396 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
397 TestNetworkPlugins/group/flannel/NetCatPod 13.3
398 TestNetworkPlugins/group/bridge/Start 59.05
399 TestNetworkPlugins/group/flannel/DNS 0.32
400 TestNetworkPlugins/group/flannel/Localhost 0.2
401 TestNetworkPlugins/group/flannel/HairPin 0.24
402 TestNetworkPlugins/group/kubenet/Start 53.78
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
404 TestNetworkPlugins/group/bridge/NetCatPod 11.42
405 TestNetworkPlugins/group/bridge/DNS 0.31
406 TestNetworkPlugins/group/bridge/Localhost 0.23
407 TestNetworkPlugins/group/bridge/HairPin 0.21
408 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
409 TestNetworkPlugins/group/kubenet/NetCatPod 11.34
410 TestNetworkPlugins/group/kubenet/DNS 0.19
411 TestNetworkPlugins/group/kubenet/Localhost 0.16
412 TestNetworkPlugins/group/kubenet/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (7.79s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-479514 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-479514 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.785895073s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.79s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-479514
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-479514: exit status 85 (74.791412ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-479514 | jenkins | v1.33.1 | 20 May 24 10:20 UTC |          |
	|         | -p download-only-479514        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:20:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:20:04.701844    7518 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:20:04.702008    7518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:04.702019    7518 out.go:304] Setting ErrFile to fd 2...
	I0520 10:20:04.702024    7518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:04.702280    7518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	W0520 10:20:04.702430    7518 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18925-2151/.minikube/config/config.json: open /home/jenkins/minikube-integration/18925-2151/.minikube/config/config.json: no such file or directory
	I0520 10:20:04.702866    7518 out.go:298] Setting JSON to true
	I0520 10:20:04.703699    7518 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":125,"bootTime":1716200280,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 10:20:04.703774    7518 start.go:139] virtualization:  
	I0520 10:20:04.706571    7518 out.go:97] [download-only-479514] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:20:04.708900    7518 out.go:169] MINIKUBE_LOCATION=18925
	W0520 10:20:04.706722    7518 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball: no such file or directory
	I0520 10:20:04.706775    7518 notify.go:220] Checking for updates...
	I0520 10:20:04.712858    7518 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:20:04.714497    7518 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 10:20:04.716369    7518 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 10:20:04.718252    7518 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0520 10:20:04.721971    7518 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 10:20:04.722235    7518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:20:04.742088    7518 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:20:04.742191    7518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:20:05.094265    7518 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 10:20:05.084155543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:20:05.094399    7518 docker.go:295] overlay module found
	I0520 10:20:05.096325    7518 out.go:97] Using the docker driver based on user configuration
	I0520 10:20:05.096355    7518 start.go:297] selected driver: docker
	I0520 10:20:05.096362    7518 start.go:901] validating driver "docker" against <nil>
	I0520 10:20:05.096472    7518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:20:05.156370    7518 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-05-20 10:20:05.147717834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:20:05.156539    7518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:20:05.156844    7518 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0520 10:20:05.157111    7518 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 10:20:05.159545    7518 out.go:169] Using Docker driver with root privileges
	I0520 10:20:05.161604    7518 cni.go:84] Creating CNI manager for ""
	I0520 10:20:05.161631    7518 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0520 10:20:05.161719    7518 start.go:340] cluster config:
	{Name:download-only-479514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-479514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:20:05.164031    7518 out.go:97] Starting "download-only-479514" primary control-plane node in "download-only-479514" cluster
	I0520 10:20:05.164068    7518 cache.go:121] Beginning downloading kic base image for docker with docker
	I0520 10:20:05.166026    7518 out.go:97] Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:20:05.166060    7518 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 10:20:05.166215    7518 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:20:05.183427    7518 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:20:05.183616    7518 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:20:05.183726    7518 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:20:05.227346    7518 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0520 10:20:05.227374    7518 cache.go:56] Caching tarball of preloaded images
	I0520 10:20:05.227547    7518 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0520 10:20:05.229936    7518 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0520 10:20:05.229963    7518 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0520 10:20:05.334048    7518 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-479514 host does not exist
	  To start a cluster, run: "minikube start -p download-only-479514"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-479514
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (6.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-572511 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-572511 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.416726155s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (6.42s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-572511
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-572511: exit status 85 (64.415687ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-479514 | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | -p download-only-479514        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| delete  | -p download-only-479514        | download-only-479514 | jenkins | v1.33.1 | 20 May 24 10:20 UTC | 20 May 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-572511 | jenkins | v1.33.1 | 20 May 24 10:20 UTC |                     |
	|         | -p download-only-572511        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/20 10:20:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0520 10:20:12.873973    7685 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:20:12.874138    7685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:12.874164    7685 out.go:304] Setting ErrFile to fd 2...
	I0520 10:20:12.874186    7685 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:20:12.874447    7685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:20:12.874883    7685 out.go:298] Setting JSON to true
	I0520 10:20:12.875604    7685 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":133,"bootTime":1716200280,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 10:20:12.875694    7685 start.go:139] virtualization:  
	I0520 10:20:12.877929    7685 out.go:97] [download-only-572511] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:20:12.880275    7685 out.go:169] MINIKUBE_LOCATION=18925
	I0520 10:20:12.878109    7685 notify.go:220] Checking for updates...
	I0520 10:20:12.882244    7685 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:20:12.884176    7685 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 10:20:12.885717    7685 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 10:20:12.887533    7685 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0520 10:20:12.891209    7685 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0520 10:20:12.891469    7685 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:20:12.910082    7685 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:20:12.910186    7685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:20:12.980313    7685 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-20 10:20:12.970900676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:20:12.980429    7685 docker.go:295] overlay module found
	I0520 10:20:12.982264    7685 out.go:97] Using the docker driver based on user configuration
	I0520 10:20:12.982295    7685 start.go:297] selected driver: docker
	I0520 10:20:12.982301    7685 start.go:901] validating driver "docker" against <nil>
	I0520 10:20:12.982415    7685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:20:13.034392    7685 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-20 10:20:13.025512688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:20:13.034550    7685 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0520 10:20:13.034845    7685 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0520 10:20:13.035007    7685 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0520 10:20:13.037202    7685 out.go:169] Using Docker driver with root privileges
	I0520 10:20:13.038857    7685 cni.go:84] Creating CNI manager for ""
	I0520 10:20:13.038888    7685 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0520 10:20:13.038899    7685 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0520 10:20:13.038981    7685 start.go:340] cluster config:
	{Name:download-only-572511 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-572511 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:20:13.041015    7685 out.go:97] Starting "download-only-572511" primary control-plane node in "download-only-572511" cluster
	I0520 10:20:13.041032    7685 cache.go:121] Beginning downloading kic base image for docker with docker
	I0520 10:20:13.042954    7685 out.go:97] Pulling base image v0.0.44-1715707529-18887 ...
	I0520 10:20:13.042980    7685 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 10:20:13.043025    7685 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0520 10:20:13.056747    7685 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0520 10:20:13.056880    7685 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0520 10:20:13.056900    7685 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory, skipping pull
	I0520 10:20:13.056905    7685 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in cache, skipping pull
	I0520 10:20:13.056912    7685 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0520 10:20:13.095646    7685 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 10:20:13.095672    7685 cache.go:56] Caching tarball of preloaded images
	I0520 10:20:13.095846    7685 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0520 10:20:13.097803    7685 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0520 10:20:13.097837    7685 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 10:20:13.199097    7685 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4?checksum=md5:7ffd0655905ace939b15286e37914582 -> /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4
	I0520 10:20:17.699428    7685 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	I0520 10:20:17.699541    7685 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-572511 host does not exist
	  To start a cluster, run: "minikube start -p download-only-572511"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-572511
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.13s)
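Neither download-only run above ever starts a node (their logs end with "host does not exist"); they only populate the local cache. A quick way to confirm what they left behind, assuming the same MINIKUBE_HOME as this CI run, is to list the preload directory shown in the download.go lines:

    # Cache path copied from the logs above; on another machine, substitute your own MINIKUBE_HOME.
    ls -lh /home/jenkins/minikube-integration/18925-2151/.minikube/cache/preloaded-tarball/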

                                                
                                    
TestBinaryMirror (0.52s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-981870 --alsologtostderr --binary-mirror http://127.0.0.1:34277 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-981870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-981870
--- PASS: TestBinaryMirror (0.52s)

                                                
                                    
TestOffline (99.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-270920 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-270920 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m37.045758033s)
helpers_test.go:175: Cleaning up "offline-docker-270920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-270920
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-270920: (2.209702837s)
--- PASS: TestOffline (99.26s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-988376
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-988376: exit status 85 (63.382279ms)

                                                
                                                
-- stdout --
	* Profile "addons-988376" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-988376"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-988376
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-988376: exit status 85 (76.845661ms)

                                                
                                                
-- stdout --
	* Profile "addons-988376" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-988376"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (149.21s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-988376 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-988376 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m29.208835178s)
--- PASS: TestAddons/Setup (149.21s)
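The Setup step only asserts that "minikube start" with that addon list completes; it does not print the resulting addon state. A manual follow-up (not part of the test) would be to list addon status on the same profile:

    # "addons list" is a standard minikube subcommand; its output format varies by version.
    out/minikube-linux-arm64 -p addons-988376 addons list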

                                                
                                    
TestAddons/parallel/Registry (17.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 41.152313ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-tbqsk" [6f6cb288-6543-4600-9365-2eddbbfb91ea] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005765896s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-pbt6j" [a0627700-0f62-473d-9b8b-54789b3fdc5e] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005244141s
addons_test.go:340: (dbg) Run:  kubectl --context addons-988376 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-988376 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-988376 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.712684409s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 ip
2024/05/20 10:23:07 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.65s)
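The registry check above runs twice: once from inside the cluster (the busybox "wget --spider" against registry.kube-system.svc.cluster.local) and once from the host (the GET against http://192.168.49.2:5000). A hypothetical host-side equivalent that uses the standard Docker Registry HTTP API instead of a bare GET would be:

    # Assumes the registry addon is still enabled and still exposes port 5000 on the node IP, as in the GET above.
    REGISTRY_HOST="$(out/minikube-linux-arm64 -p addons-988376 ip)"
    curl -s "http://${REGISTRY_HOST}:5000/v2/_catalog"

Note that the test disables the registry addon at the end, so this only works while the addon is enabled.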

                                                
                                    
TestAddons/parallel/InspektorGadget (10.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-d4kdp" [b50c54e7-f1db-467c-bdd7-32334acf6c35] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004563055s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-988376
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-988376: (5.781228387s)
--- PASS: TestAddons/parallel/InspektorGadget (10.79s)
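The helpers_test.go wait above polls for pods by label until they are Running and Ready. The equivalent ad-hoc check, using the namespace and selector from that wait line, would be:

    # Namespace "gadget" and label k8s-app=gadget are taken from the wait line above.
    kubectl --context addons-988376 get pods -n gadget -l k8s-app=gadget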

                                                
                                    
TestAddons/parallel/MetricsServer (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 2.876735ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-nrn8r" [97655ebe-e88e-4796-9095-eb1b95ded83d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00387432s
addons_test.go:415: (dbg) Run:  kubectl --context addons-988376 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)
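"kubectl top pods" in the step above is served by the metrics.k8s.io aggregated API that the metrics-server addon registers. If top output ever looks wrong, querying that API directly (not something this test does) separates an addon problem from a client-side one:

    # metrics.k8s.io/v1beta1 is the standard metrics-server API group/version; output is raw JSON.
    kubectl --context addons-988376 get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods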

                                                
                                    
TestAddons/parallel/CSI (57.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.998394ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-988376 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-988376 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [74d97fd0-02a5-434d-8c70-c7c3acbff5af] Pending
helpers_test.go:344: "task-pv-pod" [74d97fd0-02a5-434d-8c70-c7c3acbff5af] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [74d97fd0-02a5-434d-8c70-c7c3acbff5af] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003818398s
addons_test.go:584: (dbg) Run:  kubectl --context addons-988376 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-988376 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-988376 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-988376 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-988376 delete pod task-pv-pod: (1.320454416s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-988376 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-988376 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-988376 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [81c51862-149b-4542-ade2-d853e4636665] Pending
helpers_test.go:344: "task-pv-pod-restore" [81c51862-149b-4542-ade2-d853e4636665] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [81c51862-149b-4542-ade2-d853e4636665] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003395689s
addons_test.go:626: (dbg) Run:  kubectl --context addons-988376 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-988376 delete pod task-pv-pod-restore: (1.033544495s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-988376 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-988376 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-988376 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.71974135s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.53s)
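For reference, the snapshot-and-restore flow exercised above can be reproduced by hand. The manifest below is only a sketch of what testdata/csi-hostpath-driver/pvc-restore.yaml plausibly contains; the object names (new-snapshot-demo, hpvc-restore) come from the log, while the storage class name and size are assumptions.

# Sketch: confirm the snapshot is ready, then restore a PVC from it.
kubectl --context addons-988376 get volumesnapshot new-snapshot-demo \
  -o jsonpath='{.status.readyToUse}' -n default
cat <<'EOF' | kubectl --context addons-988376 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc        # assumed class name for the csi-hostpath-driver addon
  dataSource:
    name: new-snapshot-demo                # the snapshot taken above
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi                         # assumed size
EOF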

                                                
                                    
x
+
TestAddons/parallel/Headlamp (12.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-988376 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-988376 --alsologtostderr -v=1: (1.002343086s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-4grst" [6ed92ee0-6351-4b0b-aaa2-25efcac1abf6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-4grst" [6ed92ee0-6351-4b0b-aaa2-25efcac1abf6] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-4grst" [6ed92ee0-6351-4b0b-aaa2-25efcac1abf6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003353573s
--- PASS: TestAddons/parallel/Headlamp (12.01s)
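A manual equivalent of the wait performed above; the namespace and label selector come from the log, the timeout is an arbitrary choice.

minikube addons enable headlamp -p addons-988376
kubectl --context addons-988376 -n headlamp wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=headlamp --timeout=120s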

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-xhvzv" [7ed72fc6-68bb-4432-9695-5792fa9e73e4] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003152746s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-988376
--- PASS: TestAddons/parallel/CloudSpanner (6.50s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.38s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-988376 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-988376 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-988376 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7cc4f6e7-a745-4705-8092-f7319cb92cfe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7cc4f6e7-a745-4705-8092-f7319cb92cfe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7cc4f6e7-a745-4705-8092-f7319cb92cfe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004174227s
addons_test.go:891: (dbg) Run:  kubectl --context addons-988376 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 ssh "cat /opt/local-path-provisioner/pvc-1d3d24c8-8add-46c7-93b5-a621b104aabf_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-988376 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-988376 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-988376 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-988376 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.211761784s)
--- PASS: TestAddons/parallel/LocalPath (51.38s)
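A minimal sketch approximating testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml. Only the object names (test-pvc, test-local-path), the container name (busybox), the file name (file1) and the provisioner host path come from the log; the storage class name, image and mount path are assumptions.

cat <<'EOF' | kubectl --context addons-988376 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path             # assumed class name for the rancher local-path provisioner
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path > /data/file1"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
# The provisioner backs the volume with a host directory such as
# /opt/local-path-provisioner/<pv-name>_default_test-pvc/ (pattern taken from the log above).
minikube -p addons-988376 ssh "ls /opt/local-path-provisioner/"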

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.41s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r7h5n" [27fcc97d-9dd2-482d-9f4c-21e0b23cca10] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004579041s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-988376
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.41s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-s8qxl" [d2af72fd-68fb-483f-a16f-27c420fabf1d] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004117135s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-988376 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-988376 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.15s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-988376
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-988376: (10.89736409s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-988376
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-988376
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-988376
--- PASS: TestAddons/StoppedEnableDisable (11.15s)

                                                
                                    
x
+
TestCertOptions (35.4s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-156777 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0520 11:11:22.581781    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-156777 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.57028287s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-156777 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-156777 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-156777 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-156777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-156777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-156777: (2.207385552s)
--- PASS: TestCertOptions (35.40s)
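The custom SANs and apiserver port requested above can be checked by hand with the same commands the test runs; grepping the SAN extension is just one convenient way to read the certificate.

minikube -p cert-options-156777 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# The extra --apiserver-ips / --apiserver-names values (192.168.15.15, www.google.com)
# should appear in the SAN list, and the kubeconfig should point at port 8555:
kubectl --context cert-options-156777 config view \
  -o jsonpath='{.clusters[?(@.name=="cert-options-156777")].cluster.server}'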

                                                
                                    
x
+
TestCertExpiration (247.01s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-748987 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-748987 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.421710131s)
E0520 11:10:54.891120    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-748987 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0520 11:13:53.301169    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-748987 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.396104606s)
helpers_test.go:175: Cleaning up "cert-expiration-748987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-748987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-748987: (2.193680971s)
--- PASS: TestCertExpiration (247.01s)
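A sketch of how the short --cert-expiration window can be observed directly; the certificate path is taken from the TestCertOptions output above, and re-issuing the certificates on the second start is exactly what this test exercises.

minikube start -p cert-expiration-748987 --memory=2048 --cert-expiration=3m \
  --driver=docker --container-runtime=docker
minikube -p cert-expiration-748987 ssh \
  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
# After the 3m window has passed, starting again with a longer expiration regenerates the certs:
minikube start -p cert-expiration-748987 --memory=2048 --cert-expiration=8760h \
  --driver=docker --container-runtime=docker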

                                                
                                    
x
+
TestDockerFlags (42.34s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-803587 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-803587 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.625586041s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-803587 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-803587 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-803587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-803587
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-803587: (2.129457509s)
--- PASS: TestDockerFlags (42.34s)
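The two systemctl queries above are how the --docker-env and --docker-opt values are verified; a condensed sketch of the same check:

minikube start -p docker-flags-803587 --docker-env=FOO=BAR --docker-opt=debug \
  --driver=docker --container-runtime=docker
minikube -p docker-flags-803587 ssh "sudo systemctl show docker --property=Environment --no-pager"
  # the test greps this output for FOO=BAR
minikube -p docker-flags-803587 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
  # the test greps this output for the --docker-opt values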

                                                
                                    
x
+
TestForceSystemdFlag (40.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-611745 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0520 11:08:38.741545    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-611745 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.641109498s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-611745 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-611745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-611745
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-611745: (2.123716202s)
--- PASS: TestForceSystemdFlag (40.08s)
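The check below mirrors the log: with --force-systemd, the Docker daemon inside the node should report the systemd cgroup driver rather than cgroupfs.

minikube start -p force-systemd-flag-611745 --memory=2048 --force-systemd \
  --driver=docker --container-runtime=docker
minikube -p force-systemd-flag-611745 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd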

                                                
                                    
x
+
TestForceSystemdEnv (40.55s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-723923 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-723923 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.923935311s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-723923 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-723923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-723923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-723923: (2.228051396s)
--- PASS: TestForceSystemdEnv (40.55s)

                                                
                                    
x
+
TestErrorSpam/setup (31.69s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-261412 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-261412 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-261412 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-261412 --driver=docker  --container-runtime=docker: (31.686306267s)
--- PASS: TestErrorSpam/setup (31.69s)

                                                
                                    
x
+
TestErrorSpam/start (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
x
+
TestErrorSpam/status (0.95s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 status
--- PASS: TestErrorSpam/status (0.95s)

                                                
                                    
x
+
TestErrorSpam/pause (1.32s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 pause
--- PASS: TestErrorSpam/pause (1.32s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.42s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

                                                
                                    
x
+
TestErrorSpam/stop (2.19s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 stop: (2.005978092s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-261412 --log_dir /tmp/nospam-261412 stop
--- PASS: TestErrorSpam/stop (2.19s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/test/nested/copy/7512/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
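This test relies on minikube's file sync: files placed under the .minikube/files directory on the host are copied to the corresponding path inside the node at start. The path below comes from the log; the read-back command is a sketch of a manual check.

# Host side: the test drops a file at
#   /home/jenkins/minikube-integration/18925-2151/.minikube/files/etc/test/nested/copy/7512/hosts
# Inside the node it should then be readable at the same path:
minikube -p functional-660553 ssh "cat /etc/test/nested/copy/7512/hosts"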

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (84.69s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-660553 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-660553 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m24.687472164s)
--- PASS: TestFunctional/serial/StartWithProxy (84.69s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.3s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-660553 --alsologtostderr -v=8
E0520 10:27:50.216630    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.224378    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.234799    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.255150    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.295480    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.376134    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.536485    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:50.857131    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:51.497678    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:52.778311    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:27:55.339407    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-660553 --alsologtostderr -v=8: (29.304116984s)
functional_test.go:659: soft start took 29.30461975s for "functional-660553" cluster.
--- PASS: TestFunctional/serial/SoftStart (29.30s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-660553 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cache add registry.k8s.io/pause:latest
E0520 10:28:00.459988    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.69s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-660553 /tmp/TestFunctionalserialCacheCmdcacheadd_local2646484161/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cache add minikube-local-cache-test:functional-660553
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cache delete minikube-local-cache-test:functional-660553
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-660553
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.00s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (318.928535ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)
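A condensed sketch of the cache workflow exercised above; every command appears in the log, only the sequence is compressed.

minikube -p functional-660553 cache add registry.k8s.io/pause:latest
minikube -p functional-660553 ssh sudo docker rmi registry.k8s.io/pause:latest        # remove it inside the node
minikube -p functional-660553 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now fails: image gone
minikube -p functional-660553 cache reload                                            # re-push everything in the cache
minikube -p functional-660553 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again
minikube cache list
minikube cache delete registry.k8s.io/pause:latest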

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 kubectl -- --context functional-660553 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-660553 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (40.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-660553 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0520 10:28:10.700225    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:28:31.181198    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-660553 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.673310724s)
functional_test.go:757: restart took 40.673420262s for "functional-660553" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.67s)
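--extra-config takes component.key=value, here passed through to the apiserver. One way to confirm the admission plugin landed is to inspect the static pod's command line; the pod name below follows the usual kube-apiserver-<node> convention, which is an assumption here.

minikube start -p functional-660553 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
kubectl --context functional-660553 -n kube-system get pod kube-apiserver-functional-660553 \
  -o jsonpath='{.spec.containers[0].command}' | grep -o 'enable-admission-plugins=[^",]*'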

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-660553 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 logs: (1.209617224s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 logs --file /tmp/TestFunctionalserialLogsFileCmd3357672097/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 logs --file /tmp/TestFunctionalserialLogsFileCmd3357672097/001/logs.txt: (1.08334451s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.08s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.25s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-660553 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-660553
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-660553: exit status 115 (813.325802ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31856 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-660553 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-660553 delete -f testdata/invalidsvc.yaml: (1.188612424s)
--- PASS: TestFunctional/serial/InvalidService (5.25s)
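The SVC_UNREACHABLE exit above comes from a Service with no running pod behind it. The manifest below is not the actual testdata/invalidsvc.yaml, only one way to provoke the same error: a NodePort Service whose selector matches nothing, so `minikube service` finds no endpoint to route to.

cat <<'EOF' | kubectl --context functional-660553 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist        # no pod carries this label, so the service has no endpoints
  ports:
  - port: 80
EOF
minikube -p functional-660553 service invalid-svc    # exits non-zero: no running pod for the service
kubectl --context functional-660553 delete svc invalid-svc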

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 config get cpus: exit status 14 (58.886413ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 config get cpus: exit status 14 (65.726657ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-660553 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-660553 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 46460: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.90s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-660553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-660553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (254.833995ms)

                                                
                                                
-- stdout --
	* [functional-660553] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:29:40.859737   45620 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:29:40.859873   45620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:29:40.859884   45620 out.go:304] Setting ErrFile to fd 2...
	I0520 10:29:40.859890   45620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:29:40.860200   45620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:29:40.860648   45620 out.go:298] Setting JSON to false
	I0520 10:29:40.861676   45620 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":701,"bootTime":1716200280,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 10:29:40.861749   45620 start.go:139] virtualization:  
	I0520 10:29:40.865788   45620 out.go:177] * [functional-660553] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0520 10:29:40.874306   45620 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:29:40.874409   45620 notify.go:220] Checking for updates...
	I0520 10:29:40.877320   45620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:29:40.883627   45620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 10:29:40.886116   45620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 10:29:40.888400   45620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:29:40.891048   45620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:29:40.894365   45620 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:29:40.894879   45620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:29:40.926116   45620 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:29:40.926239   45620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:29:41.013983   45620 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-20 10:29:41.003855139 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:29:41.014092   45620 docker.go:295] overlay module found
	I0520 10:29:41.016725   45620 out.go:177] * Using the docker driver based on existing profile
	I0520 10:29:41.018721   45620 start.go:297] selected driver: docker
	I0520 10:29:41.018742   45620 start.go:901] validating driver "docker" against &{Name:functional-660553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-660553 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:29:41.018874   45620 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:29:41.021326   45620 out.go:177] 
	W0520 10:29:41.023420   45620 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0520 10:29:41.025216   45620 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-660553 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.59s)
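--dry-run validates the requested configuration without starting anything; the 1800MB memory floor shown in the error above is enforced at this stage, so the second invocation (without the undersized --memory) passes. A condensed sketch:

minikube start -p functional-660553 --dry-run --memory 250MB \
  --driver=docker --container-runtime=docker     # exits 23: RSRC_INSUFFICIENT_REQ_MEMORY
minikube start -p functional-660553 --dry-run \
  --driver=docker --container-runtime=docker     # passes validation, nothing is created
echo $?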

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-660553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-660553 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (280.160125ms)

                                                
                                                
-- stdout --
	* [functional-660553] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:29:41.969802   45938 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:29:41.969997   45938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:29:41.970022   45938 out.go:304] Setting ErrFile to fd 2...
	I0520 10:29:41.970040   45938 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:29:41.970408   45938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:29:41.970801   45938 out.go:298] Setting JSON to false
	I0520 10:29:41.971773   45938 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":702,"bootTime":1716200280,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1061-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0520 10:29:41.971892   45938 start.go:139] virtualization:  
	I0520 10:29:41.975850   45938 out.go:177] * [functional-660553] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0520 10:29:41.978057   45938 out.go:177]   - MINIKUBE_LOCATION=18925
	I0520 10:29:41.978104   45938 notify.go:220] Checking for updates...
	I0520 10:29:41.980318   45938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0520 10:29:41.982168   45938 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	I0520 10:29:41.984074   45938 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	I0520 10:29:41.986247   45938 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0520 10:29:41.988453   45938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0520 10:29:41.991196   45938 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:29:41.991767   45938 driver.go:392] Setting default libvirt URI to qemu:///system
	I0520 10:29:42.020273   45938 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0520 10:29:42.020409   45938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:29:42.167580   45938 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-20 10:29:42.142715166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:29:42.167696   45938 docker.go:295] overlay module found
	I0520 10:29:42.170329   45938 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0520 10:29:42.172273   45938 start.go:297] selected driver: docker
	I0520 10:29:42.172299   45938 start.go:901] validating driver "docker" against &{Name:functional-660553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-660553 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0520 10:29:42.172426   45938 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0520 10:29:42.176303   45938 out.go:177] 
	W0520 10:29:42.179102   45938 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0520 10:29:42.181612   45938 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
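
The French output above is the point of this subtest: under a French locale minikube localizes its messages, and the warning is the same RSRC_INSUFFICIENT_REQ_MEMORY error seen in the English dry-run ("Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). A sketch of triggering the localized output by hand, assuming translations are selected from the standard locale environment variables:

    LC_ALL=fr_FR.UTF-8 minikube start -p functional-660553 --dry-run --memory 250MB --driver=docker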

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
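
The three invocations above exercise the default, templated, and JSON status formats. Equivalent ad-hoc checks (the jq call is illustrative and not part of the test):

    minikube -p functional-660553 status
    minikube -p functional-660553 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
    minikube -p functional-660553 status -o json | jq -r .Host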

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-660553 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-660553 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-829v6" [b05de03c-7347-4eed-8a46-fe7873134e15] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-829v6" [b05de03c-7347-4eed-8a46-fe7873134e15] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004333822s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30810
functional_test.go:1671: http://192.168.49.2:30810: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6f49f58cd5-829v6

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30810
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)
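
The sequence above is the standard NodePort round trip: create a deployment, expose it, then ask minikube for the node URL. A condensed reproduction (image and port come straight from the log; curl is added only to show the endpoint responding with the echoserver page above):

    kubectl --context functional-660553 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-660553 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-660553 service hello-node-connect --url)
    curl -s "$URL"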

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6e524fcd-81f8-4daa-82f7-bd02e6b9a0d4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004249518s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-660553 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-660553 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-660553 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-660553 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3724fe05-283c-4073-8332-f11d8d6b5cfe] Pending
helpers_test.go:344: "sp-pod" [3724fe05-283c-4073-8332-f11d8d6b5cfe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3724fe05-283c-4073-8332-f11d8d6b5cfe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004426674s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-660553 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-660553 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-660553 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0e08b73e-1e24-43a7-bfc7-862b2491b82e] Pending
helpers_test.go:344: "sp-pod" [0e08b73e-1e24-43a7-bfc7-862b2491b82e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0e08b73e-1e24-43a7-bfc7-862b2491b82e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.005162565s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-660553 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.17s)
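
The PVC flow above binds a claim, writes a file from one pod, then verifies the data survives pod replacement. A minimal sketch of the kind of claim the testdata applies (an illustrative reconstruction, not the actual testdata/storage-provisioner manifests):

    cat <<'EOF' | kubectl --context functional-660553 apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 500Mi
    EOF
    kubectl --context functional-660553 get pvc myclaim -o jsonpath='{.status.phase}'   # "Bound" once the default StorageClass provisions it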

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh -n functional-660553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cp functional-660553:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1555423695/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh -n functional-660553 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh -n functional-660553 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)
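
minikube cp copies in both directions (host to node and node to host) and, as the third invocation above shows, creates missing target directories. Standalone equivalents of the same commands:

    minikube -p functional-660553 cp testdata/cp-test.txt /home/docker/cp-test.txt         # host -> node
    minikube -p functional-660553 cp functional-660553:/home/docker/cp-test.txt ./cp.txt   # node -> host
    minikube -p functional-660553 ssh "sudo cat /home/docker/cp-test.txt"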

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7512/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /etc/test/nested/copy/7512/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)
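
This check relies on minikube's file sync: files placed under $MINIKUBE_HOME/files/<path> are copied into the node at /<path> on start, which is why /etc/test/nested/copy/7512/hosts exists inside the VM. A sketch with a hypothetical file, assuming the documented files-directory behavior:

    mkdir -p ~/.minikube/files/etc/example
    echo "synced" > ~/.minikube/files/etc/example/note.txt
    minikube start -p functional-660553                       # re-running start syncs the files
    minikube -p functional-660553 ssh "cat /etc/example/note.txt"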

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7512.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /etc/ssl/certs/7512.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7512.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /usr/share/ca-certificates/7512.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/75122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /etc/ssl/certs/75122.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/75122.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /usr/share/ca-certificates/75122.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)
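
Cert sync is the companion mechanism: PEM certificates dropped under $MINIKUBE_HOME/certs are copied into /etc/ssl/certs and /usr/share/ca-certificates on the node, and the 51391683.0 / 3ec20f2e.0 entries checked above appear to be the OpenSSL subject-hash names for those certificates. A sketch with a hypothetical certificate file:

    cp my-ca.pem ~/.minikube/certs/
    minikube start -p functional-660553
    minikube -p functional-660553 ssh "sudo cat /etc/ssl/certs/my-ca.pem"
    openssl x509 -in my-ca.pem -noout -hash    # prints the <hash> used for /etc/ssl/certs/<hash>.0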

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-660553 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 ssh "sudo systemctl is-active crio": exit status 1 (463.006752ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
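
The non-zero exit here is expected: systemctl is-active prints "inactive" and returns a non-zero status (3, as the stderr shows) for a unit that is not running, and with the docker runtime selected the crio service should indeed be inactive. The same check by hand:

    minikube -p functional-660553 ssh "sudo systemctl is-active crio"; echo "exit=$?"
    # expected: "inactive" on stdout and a non-zero exit code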

                                                
                                    
x
+
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-660553 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-660553 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-660553 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-660553 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 40905: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-660553 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-660553
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-660553
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-660553 image ls --format short --alsologtostderr:
I0520 10:29:44.003453   46401 out.go:291] Setting OutFile to fd 1 ...
I0520 10:29:44.003654   46401 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:44.003662   46401 out.go:304] Setting ErrFile to fd 2...
I0520 10:29:44.003667   46401 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:44.003918   46401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
I0520 10:29:44.004521   46401 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:44.004645   46401 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:44.006634   46401 cli_runner.go:164] Run: docker container inspect functional-660553 --format={{.State.Status}}
I0520 10:29:44.031560   46401 ssh_runner.go:195] Run: systemctl --version
I0520 10:29:44.031624   46401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660553
I0520 10:29:44.057148   46401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/functional-660553/id_rsa Username:docker}
I0520 10:29:44.145406   46401 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
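
image ls supports several output formats; the short form above is followed by table, json, and yaml in the sibling subtests below. For ad-hoc inspection the JSON form composes well with jq (jq is illustrative, not part of the test):

    minikube -p functional-660553 image ls --format table
    minikube -p functional-660553 image ls --format json | jq -r '.[].repoTags[]'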

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-660553 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.1           | 163ff818d154d | 60.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 05eccb821e159 | 87.9MB |
| docker.io/library/nginx                     | latest            | 8dd77ef2d82ea | 193MB  |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/localhost/my-image                | functional-660553 | 8d64d8d1c090d | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-660553 | 64c33b26123c3 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 234ac56e455be | 107MB  |
| docker.io/library/nginx                     | alpine            | 9d6767b714bf1 | 49.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 988b55d423baf | 112MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-660553 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-660553 image ls --format table --alsologtostderr:
I0520 10:29:48.197441   46974 out.go:291] Setting OutFile to fd 1 ...
I0520 10:29:48.197862   46974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:48.197897   46974 out.go:304] Setting ErrFile to fd 2...
I0520 10:29:48.197918   46974 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:48.198203   46974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
I0520 10:29:48.198858   46974 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:48.199044   46974 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:48.199572   46974 cli_runner.go:164] Run: docker container inspect functional-660553 --format={{.State.Status}}
I0520 10:29:48.224637   46974 ssh_runner.go:195] Run: systemctl --version
I0520 10:29:48.224700   46974 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660553
I0520 10:29:48.246681   46974 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/functional-660553/id_rsa Username:docker}
I0520 10:29:48.337876   46974 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/05/20 10:29:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-660553 image ls --format json --alsologtostderr:
[{"id":"9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"49700000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8d64d8d1c090d3d48489e5d7e724786513155057c70dec5984c16f4a9314cec7","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-660553"],"size":"1410000"},{"id":"988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"112000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d
86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"107000000"},{"id":"05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"87900000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/a
ddon-resizer:functional-660553"],"size":"32900000"},{"id":"64c33b26123c39c37c3cd4315c8a370f3518608dd912f16c8ab6738b0aa3f09f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-660553"],"size":"30"},{"id":"8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"60500000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-660553 image ls --format json --alsologtostderr:
I0520 10:29:47.917565   46943 out.go:291] Setting OutFile to fd 1 ...
I0520 10:29:47.917933   46943 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:47.917943   46943 out.go:304] Setting ErrFile to fd 2...
I0520 10:29:47.917948   46943 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:47.918488   46943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
I0520 10:29:47.920274   46943 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:47.920770   46943 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:47.922136   46943 cli_runner.go:164] Run: docker container inspect functional-660553 --format={{.State.Status}}
I0520 10:29:47.963726   46943 ssh_runner.go:195] Run: systemctl --version
I0520 10:29:47.963788   46943 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660553
I0520 10:29:47.983168   46943 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/functional-660553/id_rsa Username:docker}
I0520 10:29:48.074078   46943 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-660553 image ls --format yaml --alsologtostderr:
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8dd77ef2d82eade8dcf2c08ea032bd9cba04c9d28ace2ccf08ad6804c27bf14f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 64c33b26123c39c37c3cd4315c8a370f3518608dd912f16c8ab6738b0aa3f09f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-660553
size: "30"
- id: 988b55d423baf54b1515b7560890c84d1822a8dfbdfcecfb2576f8cb5c2b28ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "112000000"
- id: 163ff818d154da230c6c8a6211a0d939689ba04df75dba2c3742fe740ffdf44a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "60500000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 05eccb821e159092de1ef74c21cd16c403e1cda549ba56687391f50f087310ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "87900000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-660553
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 234ac56e455bef8ae70f120f49fcf5daea5c4171c5ffbf8f9e791a516fef99b4
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "107000000"
- id: 9d6767b714bf1ecd2cdab75b590f2c572ac33743c7786ef5d619f7b088dbe0bb
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "49700000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-660553 image ls --format yaml --alsologtostderr:
I0520 10:29:44.256493   46428 out.go:291] Setting OutFile to fd 1 ...
I0520 10:29:44.256723   46428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:44.256745   46428 out.go:304] Setting ErrFile to fd 2...
I0520 10:29:44.256766   46428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:44.257081   46428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
I0520 10:29:44.257735   46428 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:44.257891   46428 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:44.258376   46428 cli_runner.go:164] Run: docker container inspect functional-660553 --format={{.State.Status}}
I0520 10:29:44.279031   46428 ssh_runner.go:195] Run: systemctl --version
I0520 10:29:44.279083   46428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660553
I0520 10:29:44.303480   46428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/functional-660553/id_rsa Username:docker}
I0520 10:29:44.402952   46428 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 ssh pgrep buildkitd: exit status 1 (413.59209ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image build -t localhost/my-image:functional-660553 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image build -t localhost/my-image:functional-660553 testdata/build --alsologtostderr: (2.729597735s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-660553 image build -t localhost/my-image:functional-660553 testdata/build --alsologtostderr:
I0520 10:29:44.928782   46667 out.go:291] Setting OutFile to fd 1 ...
I0520 10:29:44.929078   46667 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:44.929109   46667 out.go:304] Setting ErrFile to fd 2...
I0520 10:29:44.929132   46667 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0520 10:29:44.929424   46667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
I0520 10:29:44.930116   46667 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:44.930985   46667 config.go:182] Loaded profile config "functional-660553": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0520 10:29:44.931520   46667 cli_runner.go:164] Run: docker container inspect functional-660553 --format={{.State.Status}}
I0520 10:29:44.961128   46667 ssh_runner.go:195] Run: systemctl --version
I0520 10:29:44.961188   46667 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-660553
I0520 10:29:44.981550   46667 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/functional-660553/id_rsa Username:docker}
I0520 10:29:45.079787   46667 build_images.go:161] Building image from path: /tmp/build.929743097.tar
I0520 10:29:45.079888   46667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0520 10:29:45.094222   46667 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.929743097.tar
I0520 10:29:45.100218   46667 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.929743097.tar: stat -c "%s %y" /var/lib/minikube/build/build.929743097.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.929743097.tar': No such file or directory
I0520 10:29:45.100261   46667 ssh_runner.go:362] scp /tmp/build.929743097.tar --> /var/lib/minikube/build/build.929743097.tar (3072 bytes)
I0520 10:29:45.137773   46667 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.929743097
I0520 10:29:45.148800   46667 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.929743097 -xf /var/lib/minikube/build/build.929743097.tar
I0520 10:29:45.160158   46667 docker.go:360] Building image: /var/lib/minikube/build/build.929743097
I0520 10:29:45.160253   46667 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-660553 /var/lib/minikube/build/build.929743097
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:8d64d8d1c090d3d48489e5d7e724786513155057c70dec5984c16f4a9314cec7
#8 writing image sha256:8d64d8d1c090d3d48489e5d7e724786513155057c70dec5984c16f4a9314cec7 done
#8 naming to localhost/my-image:functional-660553 done
#8 DONE 0.1s
I0520 10:29:47.552282   46667 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-660553 /var/lib/minikube/build/build.929743097: (2.39200195s)
I0520 10:29:47.552348   46667 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.929743097
I0520 10:29:47.565426   46667 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.929743097.tar
I0520 10:29:47.581184   46667 build_images.go:217] Built localhost/my-image:functional-660553 from /tmp/build.929743097.tar
I0520 10:29:47.581266   46667 build_images.go:133] succeeded building to: functional-660553
I0520 10:29:47.581285   46667 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
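
The BuildKit steps above imply a three-step Dockerfile (FROM the minikube busybox image, RUN true, ADD content.txt). A hypothetical reconstruction of a build context like testdata/build, usable with the same image build command; the file contents are guesses, only the step structure is taken from the log:

    mkdir build-ctx && cd build-ctx
    printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > Dockerfile
    echo "hello" > content.txt
    minikube -p functional-660553 image build -t localhost/my-image:functional-660553 .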

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.504991454s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-660553
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-660553 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-660553 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [223f3c4f-db2a-4023-b12c-565a76bb83d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [223f3c4f-db2a-4023-b12c-565a76bb83d5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003855712s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.33s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr: (3.470641539s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.67s)
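
image load --daemon pushes an image from the host's Docker daemon into the cluster's container runtime, which is why the follow-up image ls can see the retagged addon-resizer. Standalone equivalent (tag names taken from the Setup subtest above):

    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-660553
    minikube -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553
    minikube -p functional-660553 image ls | grep addon-resizer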

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr: (2.602329789s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.80s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.681053859s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-660553
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image load --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr: (3.913366067s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.88s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-660553 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.128.81 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-660553 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.27s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-660553 docker-env) && out/minikube-linux-arm64 status -p functional-660553"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-660553 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.27s)
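Note: docker-env prints shell exports that point the host's docker CLI at the Docker daemon running inside the minikube node. A minimal sketch of the usage verified above:
	eval $(out/minikube-linux-arm64 -p functional-660553 docker-env)
	docker images    # now lists images from the cluster's Docker daemon, not the host's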

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image save gcr.io/google-containers/addon-resizer:functional-660553 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image save gcr.io/google-containers/addon-resizer:functional-660553 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.095047097s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image rm gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.17136262s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.40s)
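Note: the three preceding image tests form a save/remove/reload round trip through a tarball. A minimal sketch of the same cycle, assuming an arbitrary scratch path on the host:
	out/minikube-linux-arm64 -p functional-660553 image save gcr.io/google-containers/addon-resizer:functional-660553 ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-660553 image rm gcr.io/google-containers/addon-resizer:functional-660553
	out/minikube-linux-arm64 -p functional-660553 image load ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-660553 image ls    # the tag is back after the reload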

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.05s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdany-port1746217284/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716200950377507763" to /tmp/TestFunctionalparallelMountCmdany-port1746217284/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716200950377507763" to /tmp/TestFunctionalparallelMountCmdany-port1746217284/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716200950377507763" to /tmp/TestFunctionalparallelMountCmdany-port1746217284/001/test-1716200950377507763
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (578.78839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 20 10:29 created-by-test
-rw-r--r-- 1 docker docker 24 May 20 10:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 20 10:29 test-1716200950377507763
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh cat /mount-9p/test-1716200950377507763
E0520 10:29:12.142216    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-660553 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a521f3e0-b325-429c-9c88-4d4ec8f2e5f8] Pending
helpers_test.go:344: "busybox-mount" [a521f3e0-b325-429c-9c88-4d4ec8f2e5f8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a521f3e0-b325-429c-9c88-4d4ec8f2e5f8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a521f3e0-b325-429c-9c88-4d4ec8f2e5f8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004064252s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-660553 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdany-port1746217284/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.05s)
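Note: minikube mount exposes a host directory inside the node over 9p; the test verifies the mount with findmnt before exercising it from the busybox-mount pod. A minimal sketch, assuming an arbitrary host directory /tmp/demo-mount:
	out/minikube-linux-arm64 mount -p functional-660553 /tmp/demo-mount:/mount-9p &    # keep running; add --port 46464 to pin the server port as in the specific-port test below
	out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-660553 ssh -- ls -la /mount-9p
The first findmnt attempt above exited non-zero, most likely because it ran before the mount had finished coming up; the immediate retry succeeded and the test still passed.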

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-660553
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 image save --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-660553 image save --daemon gcr.io/google-containers/addon-resizer:functional-660553 --alsologtostderr: (1.11243285s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-660553
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.05s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdspecific-port4051883524/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (479.914204ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdspecific-port4051883524/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-660553 ssh "sudo umount -f /mount-9p": exit status 1 (307.524232ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-660553 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdspecific-port4051883524/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.05s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1452367714/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1452367714/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1452367714/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-660553 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1452367714/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1452367714/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-660553 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1452367714/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-660553 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-660553 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-d9cxt" [cdf3e1da-96c1-401e-98cd-04df7cfdc7ae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-d9cxt" [cdf3e1da-96c1-401e-98cd-04df7cfdc7ae] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004021244s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)
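Note: the ServiceCmd tests rely on this simple NodePort deployment. A minimal sketch of the setup used above (image and port taken from the log):
	kubectl --context functional-660553 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-660553 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-660553 get pods -l app=hello-node    # wait until the pod reports Running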

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 service list -o json
functional_test.go:1490: Took "581.59021ms" to run "out/minikube-linux-arm64 -p functional-660553 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "374.370964ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "51.603804ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "388.714949ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "87.591865ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
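Note: profile list has a machine-readable mode, and --light skips probing each cluster's status, which is why the light variant returns in well under 100ms here versus roughly 390ms for the full listing. A minimal sketch:
	out/minikube-linux-arm64 profile list -o json
	out/minikube-linux-arm64 profile list -o json --light    # faster: no per-cluster status lookup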

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32032
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-660553 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32032
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.51s)
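Note: the HTTPS, Format, and URL tests are three equivalent ways of resolving the same NodePort endpoint (192.168.49.2:32032 in this run). A minimal sketch of the commands exercised:
	out/minikube-linux-arm64 -p functional-660553 service hello-node --url
	out/minikube-linux-arm64 -p functional-660553 service --namespace=default --https --url hello-node
	out/minikube-linux-arm64 -p functional-660553 service list -o json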

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-660553
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-660553
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-660553
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (132.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-256268 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0520 10:30:34.062703    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-256268 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m12.146741409s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (132.96s)
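Note: the HA suite drives a multi-control-plane cluster created with the --ha flag; the later status output shows three control-plane nodes plus one worker added by AddWorkerNode. A minimal sketch of the start command used above, plus the follow-up status check:
	out/minikube-linux-arm64 start -p ha-256268 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=docker
	out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr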

                                                
                                    
TestMultiControlPlane/serial/DeployApp (45.33s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-256268 -- rollout status deployment/busybox: (4.842360928s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0520 10:32:50.218949    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-jm58q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-wgbhf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-jm58q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-wgbhf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-jm58q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-wgbhf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.33s)
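Note: the repeated "expected 3 Pod IPs but got 2 (may be temporary)" lines are the test polling until all three busybox replicas report a pod IP. The apply/rollout/poll loop looks roughly like this sketch:
	out/minikube-linux-arm64 kubectl -p ha-256268 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	out/minikube-linux-arm64 kubectl -p ha-256268 -- rollout status deployment/busybox
	out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].status.podIP}'    # retried until three IPs show up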

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-jm58q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-jm58q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-wgbhf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-wgbhf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
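Note: the awk/cut pipeline above extracts the resolved address of host.minikube.internal from inside each pod, which is then pinged to prove the pods can reach the host. A minimal sketch run against one pod (pod name taken from this run); 192.168.49.1 is the docker network gateway that the host is reachable on in this setup:
	out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-arm64 kubectl -p ha-256268 -- exec busybox-fc5497c4f-7pp8j -- sh -c "ping -c 1 192.168.49.1"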

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (27.6s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-256268 -v=7 --alsologtostderr
E0520 10:33:17.903048    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-256268 -v=7 --alsologtostderr: (26.620805105s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.60s)
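Note: node add joins an extra worker to the running HA cluster (it shows up as ha-256268-m04, type Worker, in the later status output). A minimal sketch:
	out/minikube-linux-arm64 node add -p ha-256268 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr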

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-256268 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.72s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp testdata/cp-test.txt ha-256268:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3359086670/001/cp-test_ha-256268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268:/home/docker/cp-test.txt ha-256268-m02:/home/docker/cp-test_ha-256268_ha-256268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test_ha-256268_ha-256268-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268:/home/docker/cp-test.txt ha-256268-m03:/home/docker/cp-test_ha-256268_ha-256268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test_ha-256268_ha-256268-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268:/home/docker/cp-test.txt ha-256268-m04:/home/docker/cp-test_ha-256268_ha-256268-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test_ha-256268_ha-256268-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp testdata/cp-test.txt ha-256268-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3359086670/001/cp-test_ha-256268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m02:/home/docker/cp-test.txt ha-256268:/home/docker/cp-test_ha-256268-m02_ha-256268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test_ha-256268-m02_ha-256268.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m02:/home/docker/cp-test.txt ha-256268-m03:/home/docker/cp-test_ha-256268-m02_ha-256268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test_ha-256268-m02_ha-256268-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m02:/home/docker/cp-test.txt ha-256268-m04:/home/docker/cp-test_ha-256268-m02_ha-256268-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test_ha-256268-m02_ha-256268-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp testdata/cp-test.txt ha-256268-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3359086670/001/cp-test_ha-256268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m03:/home/docker/cp-test.txt ha-256268:/home/docker/cp-test_ha-256268-m03_ha-256268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test_ha-256268-m03_ha-256268.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m03:/home/docker/cp-test.txt ha-256268-m02:/home/docker/cp-test_ha-256268-m03_ha-256268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test_ha-256268-m03_ha-256268-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m03:/home/docker/cp-test.txt ha-256268-m04:/home/docker/cp-test_ha-256268-m03_ha-256268-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test_ha-256268-m03_ha-256268-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp testdata/cp-test.txt ha-256268-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3359086670/001/cp-test_ha-256268-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m04:/home/docker/cp-test.txt ha-256268:/home/docker/cp-test_ha-256268-m04_ha-256268.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268 "sudo cat /home/docker/cp-test_ha-256268-m04_ha-256268.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m04:/home/docker/cp-test.txt ha-256268-m02:/home/docker/cp-test_ha-256268-m04_ha-256268-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m02 "sudo cat /home/docker/cp-test_ha-256268-m04_ha-256268-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m04:/home/docker/cp-test.txt ha-256268-m03:/home/docker/cp-test_ha-256268-m04_ha-256268-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test_ha-256268-m04_ha-256268-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.72s)
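Note: the CopyFile block checks every copy direction minikube cp supports: host to node, node to host, and node to node, addressing nodes by profile name plus the -mNN suffix. A minimal sketch of the three shapes, using this run's node names and an arbitrary host destination path:
	out/minikube-linux-arm64 -p ha-256268 cp testdata/cp-test.txt ha-256268-m02:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m02:/home/docker/cp-test.txt /tmp/cp-test_ha-256268-m02.txt
	out/minikube-linux-arm64 -p ha-256268 cp ha-256268-m02:/home/docker/cp-test.txt ha-256268-m03:/home/docker/cp-test_ha-256268-m02_ha-256268-m03.txt
	out/minikube-linux-arm64 -p ha-256268 ssh -n ha-256268-m03 "sudo cat /home/docker/cp-test_ha-256268-m02_ha-256268-m03.txt"    # verify the node-to-node copy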

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-256268 node stop m02 -v=7 --alsologtostderr: (10.988314935s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
E0520 10:33:53.302138    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:53.307408    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:53.318502    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:53.339339    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:53.379583    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:53.460423    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr: exit status 7 (694.149629ms)

                                                
                                                
-- stdout --
	ha-256268
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-256268-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256268-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-256268-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:33:52.950350   67883 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:33:52.950634   67883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:33:52.950647   67883 out.go:304] Setting ErrFile to fd 2...
	I0520 10:33:52.950652   67883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:33:52.951318   67883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:33:52.951590   67883 out.go:298] Setting JSON to false
	I0520 10:33:52.951659   67883 mustload.go:65] Loading cluster: ha-256268
	I0520 10:33:52.951735   67883 notify.go:220] Checking for updates...
	I0520 10:33:52.953219   67883 config.go:182] Loaded profile config "ha-256268": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:33:52.953274   67883 status.go:255] checking status of ha-256268 ...
	I0520 10:33:52.954217   67883 cli_runner.go:164] Run: docker container inspect ha-256268 --format={{.State.Status}}
	I0520 10:33:52.971875   67883 status.go:330] ha-256268 host status = "Running" (err=<nil>)
	I0520 10:33:52.971895   67883 host.go:66] Checking if "ha-256268" exists ...
	I0520 10:33:52.972178   67883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256268
	I0520 10:33:52.989979   67883 host.go:66] Checking if "ha-256268" exists ...
	I0520 10:33:52.990381   67883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:33:52.990446   67883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256268
	I0520 10:33:53.014493   67883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/ha-256268/id_rsa Username:docker}
	I0520 10:33:53.102072   67883 ssh_runner.go:195] Run: systemctl --version
	I0520 10:33:53.106227   67883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:33:53.118289   67883 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:33:53.188133   67883 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:true NGoroutines:72 SystemTime:2024-05-20 10:33:53.178563323 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:33:53.188737   67883 kubeconfig.go:125] found "ha-256268" server: "https://192.168.49.254:8443"
	I0520 10:33:53.188805   67883 api_server.go:166] Checking apiserver status ...
	I0520 10:33:53.188858   67883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:33:53.201466   67883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2134/cgroup
	I0520 10:33:53.210989   67883 api_server.go:182] apiserver freezer: "9:freezer:/docker/77b08fea8f6971c73e926a9c38677c671ee02fd6bd5259f3598b6f8b702b2e9c/kubepods/burstable/podebf76b8b2df1ea82e930edc76e7830bd/470fc5433a6c5fd7a5d00112ec88903bb9bebfed8a80b72c73d71d1fd7d453cb"
	I0520 10:33:53.211061   67883 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/77b08fea8f6971c73e926a9c38677c671ee02fd6bd5259f3598b6f8b702b2e9c/kubepods/burstable/podebf76b8b2df1ea82e930edc76e7830bd/470fc5433a6c5fd7a5d00112ec88903bb9bebfed8a80b72c73d71d1fd7d453cb/freezer.state
	I0520 10:33:53.219607   67883 api_server.go:204] freezer state: "THAWED"
	I0520 10:33:53.219636   67883 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0520 10:33:53.227169   67883 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0520 10:33:53.227194   67883 status.go:422] ha-256268 apiserver status = Running (err=<nil>)
	I0520 10:33:53.227205   67883 status.go:257] ha-256268 status: &{Name:ha-256268 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:33:53.227220   67883 status.go:255] checking status of ha-256268-m02 ...
	I0520 10:33:53.227512   67883 cli_runner.go:164] Run: docker container inspect ha-256268-m02 --format={{.State.Status}}
	I0520 10:33:53.244467   67883 status.go:330] ha-256268-m02 host status = "Stopped" (err=<nil>)
	I0520 10:33:53.244489   67883 status.go:343] host is not running, skipping remaining checks
	I0520 10:33:53.244496   67883 status.go:257] ha-256268-m02 status: &{Name:ha-256268-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:33:53.244524   67883 status.go:255] checking status of ha-256268-m03 ...
	I0520 10:33:53.244811   67883 cli_runner.go:164] Run: docker container inspect ha-256268-m03 --format={{.State.Status}}
	I0520 10:33:53.260937   67883 status.go:330] ha-256268-m03 host status = "Running" (err=<nil>)
	I0520 10:33:53.260960   67883 host.go:66] Checking if "ha-256268-m03" exists ...
	I0520 10:33:53.261367   67883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256268-m03
	I0520 10:33:53.279230   67883 host.go:66] Checking if "ha-256268-m03" exists ...
	I0520 10:33:53.279533   67883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:33:53.279576   67883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256268-m03
	I0520 10:33:53.296944   67883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/ha-256268-m03/id_rsa Username:docker}
	I0520 10:33:53.386540   67883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:33:53.399912   67883 kubeconfig.go:125] found "ha-256268" server: "https://192.168.49.254:8443"
	I0520 10:33:53.399984   67883 api_server.go:166] Checking apiserver status ...
	I0520 10:33:53.400050   67883 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:33:53.412371   67883 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2115/cgroup
	I0520 10:33:53.422130   67883 api_server.go:182] apiserver freezer: "9:freezer:/docker/ca910e87a5c323a4c81b2c9e5daccaf915b22faffb40e70d713b2cbd7ebe8fb6/kubepods/burstable/podde63ce3be3f08f857587b7da21f9127a/202642c5ec7223cd376a8056e7a47a611762fdbe3535fa66c45b9dc490579a22"
	I0520 10:33:53.422225   67883 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ca910e87a5c323a4c81b2c9e5daccaf915b22faffb40e70d713b2cbd7ebe8fb6/kubepods/burstable/podde63ce3be3f08f857587b7da21f9127a/202642c5ec7223cd376a8056e7a47a611762fdbe3535fa66c45b9dc490579a22/freezer.state
	I0520 10:33:53.430967   67883 api_server.go:204] freezer state: "THAWED"
	I0520 10:33:53.431010   67883 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0520 10:33:53.438948   67883 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0520 10:33:53.438977   67883 status.go:422] ha-256268-m03 apiserver status = Running (err=<nil>)
	I0520 10:33:53.438989   67883 status.go:257] ha-256268-m03 status: &{Name:ha-256268-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:33:53.439015   67883 status.go:255] checking status of ha-256268-m04 ...
	I0520 10:33:53.439323   67883 cli_runner.go:164] Run: docker container inspect ha-256268-m04 --format={{.State.Status}}
	I0520 10:33:53.455927   67883 status.go:330] ha-256268-m04 host status = "Running" (err=<nil>)
	I0520 10:33:53.455953   67883 host.go:66] Checking if "ha-256268-m04" exists ...
	I0520 10:33:53.456256   67883 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-256268-m04
	I0520 10:33:53.473803   67883 host.go:66] Checking if "ha-256268-m04" exists ...
	I0520 10:33:53.474078   67883 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:33:53.474123   67883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-256268-m04
	I0520 10:33:53.490401   67883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/ha-256268-m04/id_rsa Username:docker}
	I0520 10:33:53.578070   67883 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:33:53.590229   67883 status.go:257] ha-256268-m04 status: &{Name:ha-256268-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.68s)
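Note: with one control-plane node stopped, minikube status exits non-zero (exit status 7 in this run) while still printing the per-node breakdown, so the test treats the non-zero exit as expected. A minimal sketch:
	out/minikube-linux-arm64 -p ha-256268 node stop m02 -v=7 --alsologtostderr
	out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr    # non-zero exit while m02 is Stopped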

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0520 10:33:53.621167    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:53.941512    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.63s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 node start m02 -v=7 --alsologtostderr
E0520 10:33:54.581718    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:55.861846    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:33:58.432532    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:34:03.553007    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:34:13.794112    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:34:34.274802    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-256268 node start m02 -v=7 --alsologtostderr: (57.201310229s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr: (1.028101967s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (58.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (254.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-256268 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-256268 -v=7 --alsologtostderr
E0520 10:35:15.234943    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-256268 -v=7 --alsologtostderr: (33.713084184s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-256268 --wait=true -v=7 --alsologtostderr
E0520 10:36:37.155332    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:37:50.216774    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 10:38:53.300961    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-256268 --wait=true -v=7 --alsologtostderr: (3m40.70334037s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-256268
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (254.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (12.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-256268 node delete m03 -v=7 --alsologtostderr: (11.491507824s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.39s)
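
Note: the final assertion above renders node readiness with a kubectl go-template. Because kubectl evaluates the template against the raw JSON (maps with lowercase keys), field references like .items and .status work as map lookups. The sketch below evaluates the same template in Go over a small hand-written node list; the JSON data is hypothetical, only the template string comes from the log.

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// Mock of `kubectl get nodes -o json` output, trimmed to the fields the
// template below actually touches (hypothetical data, not from this run).
const nodesJSON = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}
]}`

// The same go-template the test passes to kubectl: one Ready status per node.
const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tpl))
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
	// Prints " True" once per node; the test checks the rendered statuses.
}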

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (32.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 stop -v=7 --alsologtostderr
E0520 10:39:20.996479    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-256268 stop -v=7 --alsologtostderr: (32.675580791s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr: exit status 7 (100.043773ms)

                                                
                                                
-- stdout --
	ha-256268
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256268-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-256268-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:39:53.635949   94358 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:39:53.636106   94358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:39:53.636116   94358 out.go:304] Setting ErrFile to fd 2...
	I0520 10:39:53.636122   94358 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:39:53.636366   94358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:39:53.636545   94358 out.go:298] Setting JSON to false
	I0520 10:39:53.636584   94358 mustload.go:65] Loading cluster: ha-256268
	I0520 10:39:53.636658   94358 notify.go:220] Checking for updates...
	I0520 10:39:53.637579   94358 config.go:182] Loaded profile config "ha-256268": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:39:53.637604   94358 status.go:255] checking status of ha-256268 ...
	I0520 10:39:53.638065   94358 cli_runner.go:164] Run: docker container inspect ha-256268 --format={{.State.Status}}
	I0520 10:39:53.654822   94358 status.go:330] ha-256268 host status = "Stopped" (err=<nil>)
	I0520 10:39:53.654847   94358 status.go:343] host is not running, skipping remaining checks
	I0520 10:39:53.654856   94358 status.go:257] ha-256268 status: &{Name:ha-256268 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:39:53.654878   94358 status.go:255] checking status of ha-256268-m02 ...
	I0520 10:39:53.655177   94358 cli_runner.go:164] Run: docker container inspect ha-256268-m02 --format={{.State.Status}}
	I0520 10:39:53.672629   94358 status.go:330] ha-256268-m02 host status = "Stopped" (err=<nil>)
	I0520 10:39:53.672653   94358 status.go:343] host is not running, skipping remaining checks
	I0520 10:39:53.672661   94358 status.go:257] ha-256268-m02 status: &{Name:ha-256268-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:39:53.672698   94358 status.go:255] checking status of ha-256268-m04 ...
	I0520 10:39:53.673001   94358 cli_runner.go:164] Run: docker container inspect ha-256268-m04 --format={{.State.Status}}
	I0520 10:39:53.691792   94358 status.go:330] ha-256268-m04 host status = "Stopped" (err=<nil>)
	I0520 10:39:53.691815   94358 status.go:343] host is not running, skipping remaining checks
	I0520 10:39:53.691823   94358 status.go:257] ha-256268-m04 status: &{Name:ha-256268-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.78s)
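
Note: after stopping the cluster, `status` still prints the per-node table but exits non-zero (exit status 7 in this run). A short sketch of how a caller can keep the output while distinguishing that expected non-zero exit from a failure to launch the binary; the exit-code meaning here is inferred from this run's log, not from documented semantics.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test uses and keep its output even
	// when the exit code is non-zero.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-256268",
		"status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// In the run above a fully stopped cluster reported exit status 7.
		fmt.Printf("status exited with code %d (non-zero is expected for a stopped cluster)\n",
			exitErr.ExitCode())
	} else if err != nil {
		panic(err) // the binary could not be started at all
	}
}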

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (87.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-256268 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-256268 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.03898488s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.03s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (45.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-256268 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-256268 --control-plane -v=7 --alsologtostderr: (44.490364914s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-256268 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
x
+
TestImageBuild/serial/Setup (33.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-734808 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-734808 --driver=docker  --container-runtime=docker: (33.072378256s)
--- PASS: TestImageBuild/serial/Setup (33.07s)

                                                
                                    
x
+
TestImageBuild/serial/NormalBuild (1.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-734808
E0520 10:42:50.216720    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-734808: (1.825496529s)
--- PASS: TestImageBuild/serial/NormalBuild (1.83s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithBuildArg (0.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-734808
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.87s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithDockerIgnore (0.7s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-734808
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.70s)

                                                
                                    
x
+
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-734808
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

                                                
                                    
x
+
TestJSONOutput/start/Command (83.4s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-878520 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0520 10:43:53.300714    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 10:44:13.263278    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-878520 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m23.399147105s)
--- PASS: TestJSONOutput/start/Command (83.40s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-878520 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.5s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-878520 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.50s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (10.9s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-878520 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-878520 --output=json --user=testUser: (10.903431281s)
--- PASS: TestJSONOutput/stop/Command (10.90s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.2s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-947088 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-947088 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.187492ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dc2839ba-ed2b-4791-8e9e-d87a3cb3ba72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-947088] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"35921387-b55a-45bf-8322-9055d4fd2c46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"d08c5cc2-a07b-435c-90ed-05c0c41bdbc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2246d6c1-45ae-47d5-a517-d4fb2d121687","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig"}}
	{"specversion":"1.0","id":"b22761be-9e7b-403a-abc6-fc650bae8849","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube"}}
	{"specversion":"1.0","id":"6345c76f-55fa-48b9-b8e2-93a3de66580b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a35a23f4-b94e-463a-89be-27c49b4f49c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"162fe421-a654-4da3-b5bb-81a9edc2d4c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-947088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-947088
--- PASS: TestErrorJSONOutput (0.20s)
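
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, and the deliberately unsupported driver surfaces as an io.k8s.sigs.minikube.error event carrying the exit code. The sketch below decodes the error event copied from the stdout above; the struct shape is a minimal assumption that mirrors only the keys visible in the log.

package main

import (
	"encoding/json"
	"fmt"
)

// One event line copied from the test's stdout above.
const line = `{"specversion":"1.0","id":"162fe421-a654-4da3-b5bb-81a9edc2d4c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

// Minimal event shape; field tags mirror the JSON keys seen in the log.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
	// io.k8s.sigs.minikube.error 56 The driver 'fail' is not supported on linux/arm64
}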

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (31.04s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-846262 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-846262 --network=: (28.865863864s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-846262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-846262
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-846262: (2.160835042s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.04s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.09s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-342675 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-342675 --network=bridge: (34.053673906s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-342675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-342675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-342675: (2.014532268s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.09s)

                                                
                                    
x
+
TestKicExistingNetwork (36.48s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-412796 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-412796 --network=existing-network: (34.326145546s)
helpers_test.go:175: Cleaning up "existing-network-412796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-412796
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-412796: (2.013904152s)
--- PASS: TestKicExistingNetwork (36.48s)

                                                
                                    
x
+
TestKicCustomSubnet (33.56s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-584650 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-584650 --subnet=192.168.60.0/24: (31.496474196s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-584650 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-584650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-584650
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-584650: (2.044054477s)
--- PASS: TestKicCustomSubnet (33.56s)
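
Note: the assertion above reads the subnet straight out of `docker network inspect` with the Go template `{{(index .IPAM.Config 0).Subnet}}`. An equivalent check that decodes the JSON instead is sketched below; the network name is the one from this run (it is deleted again right after the test), and the struct declares only the single field the check needs.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shape of the one field the test reads; the real inspect output has many more.
type network struct {
	IPAM struct {
		Config []struct {
			Subnet string
		}
	}
}

func main() {
	// Equivalent of --format={{(index .IPAM.Config 0).Subnet}}, done by
	// decoding the full JSON instead of a Go template.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-584650").Output()
	if err != nil {
		panic(err)
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		panic(err)
	}
	if len(nets) > 0 && len(nets[0].IPAM.Config) > 0 {
		fmt.Println(nets[0].IPAM.Config[0].Subnet) // expected here: 192.168.60.0/24
	}
}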

                                                
                                    
x
+
TestKicStaticIP (34.41s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-509659 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-509659 --static-ip=192.168.200.200: (32.246424244s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-509659 ip
helpers_test.go:175: Cleaning up "static-ip-509659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-509659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-509659: (2.020329122s)
--- PASS: TestKicStaticIP (34.41s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (71.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-248036 --driver=docker  --container-runtime=docker
E0520 10:47:50.216658    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-248036 --driver=docker  --container-runtime=docker: (29.629408042s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-250933 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-250933 --driver=docker  --container-runtime=docker: (36.856587162s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-248036
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-250933
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-250933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-250933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-250933: (2.051253936s)
helpers_test.go:175: Cleaning up "first-248036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-248036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-248036: (2.146799537s)
--- PASS: TestMinikubeProfile (71.83s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.84s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-446138 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-446138 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.836218834s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.84s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-446138 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (7.36s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-458782 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-458782 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.355488742s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.36s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-458782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.45s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-446138 --alsologtostderr -v=5
E0520 10:48:53.301562    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-446138 --alsologtostderr -v=5: (1.446249494s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-458782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-458782
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-458782: (1.194504456s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (8.25s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-458782
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-458782: (7.246001328s)
--- PASS: TestMountStart/serial/RestartStopped (8.25s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-458782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (65.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-202742 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-202742 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m4.75826382s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.26s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (45.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-202742 -- rollout status deployment/busybox: (2.315163977s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0520 10:50:16.357161    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-sfvfn -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-wnqqr -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-sfvfn -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-wnqqr -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-sfvfn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-wnqqr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (45.85s)
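
Note: the repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are the test polling the busybox deployment until both replicas report a pod IP. A rough equivalent of that retry loop is sketched below; it shells out to plain kubectl against the same context rather than the minikube kubectl wrapper, and the retry count and interval are guesses, not the test's real backoff.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Keep asking for the pod IPs until both replicas have one.
	jsonpath := `jsonpath={.items[*].status.podIP}`
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "--context", "multinode-202742",
			"get", "pods", "-o", jsonpath).Output()
		if err != nil {
			panic(err)
		}
		ips := strings.Fields(string(out))
		if len(ips) >= 2 {
			fmt.Println("both pods have IPs:", ips)
			return
		}
		fmt.Printf("expected 2 Pod IPs but got %d, retrying\n", len(ips))
		time.Sleep(5 * time.Second)
	}
}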

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-sfvfn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-sfvfn -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-wnqqr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-202742 -- exec busybox-fc5497c4f-wnqqr -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (17.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-202742 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-202742 -v 3 --alsologtostderr: (17.113747937s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.87s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-202742 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp testdata/cp-test.txt multinode-202742:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile645325672/001/cp-test_multinode-202742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742:/home/docker/cp-test.txt multinode-202742-m02:/home/docker/cp-test_multinode-202742_multinode-202742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m02 "sudo cat /home/docker/cp-test_multinode-202742_multinode-202742-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742:/home/docker/cp-test.txt multinode-202742-m03:/home/docker/cp-test_multinode-202742_multinode-202742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m03 "sudo cat /home/docker/cp-test_multinode-202742_multinode-202742-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp testdata/cp-test.txt multinode-202742-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile645325672/001/cp-test_multinode-202742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742-m02:/home/docker/cp-test.txt multinode-202742:/home/docker/cp-test_multinode-202742-m02_multinode-202742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742 "sudo cat /home/docker/cp-test_multinode-202742-m02_multinode-202742.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742-m02:/home/docker/cp-test.txt multinode-202742-m03:/home/docker/cp-test_multinode-202742-m02_multinode-202742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m03 "sudo cat /home/docker/cp-test_multinode-202742-m02_multinode-202742-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp testdata/cp-test.txt multinode-202742-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile645325672/001/cp-test_multinode-202742-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742-m03:/home/docker/cp-test.txt multinode-202742:/home/docker/cp-test_multinode-202742-m03_multinode-202742.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742 "sudo cat /home/docker/cp-test_multinode-202742-m03_multinode-202742.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 cp multinode-202742-m03:/home/docker/cp-test.txt multinode-202742-m02:/home/docker/cp-test_multinode-202742-m03_multinode-202742-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 ssh -n multinode-202742-m02 "sudo cat /home/docker/cp-test_multinode-202742-m03_multinode-202742-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.54s)
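
Note: each cp/ssh pair above copies a file into a node with `minikube cp` and reads it back with `sudo cat` over `minikube ssh` to verify the contents. One such round trip as a sketch; the binary path, profile name, and file paths are taken from the log above.

package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	local := "testdata/cp-test.txt"
	// Copy the local file into the primary node.
	if err := exec.Command(mk, "-p", "multinode-202742", "cp",
		local, "multinode-202742:/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over SSH and compare with the local source.
	remote, err := exec.Command(mk, "-p", "multinode-202742", "ssh", "-n",
		"multinode-202742", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(want)) {
		panic("copied file does not match the local source")
	}
}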

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-202742 node stop m03: (1.204175108s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-202742 status: exit status 7 (499.049586ms)

                                                
                                                
-- stdout --
	multinode-202742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-202742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-202742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr: exit status 7 (490.638743ms)

                                                
                                                
-- stdout --
	multinode-202742
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-202742-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-202742-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:51:27.368151  162947 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:51:27.368433  162947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:51:27.368447  162947 out.go:304] Setting ErrFile to fd 2...
	I0520 10:51:27.368453  162947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:51:27.369003  162947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:51:27.369286  162947 out.go:298] Setting JSON to false
	I0520 10:51:27.369343  162947 mustload.go:65] Loading cluster: multinode-202742
	I0520 10:51:27.369409  162947 notify.go:220] Checking for updates...
	I0520 10:51:27.370383  162947 config.go:182] Loaded profile config "multinode-202742": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:51:27.370407  162947 status.go:255] checking status of multinode-202742 ...
	I0520 10:51:27.371061  162947 cli_runner.go:164] Run: docker container inspect multinode-202742 --format={{.State.Status}}
	I0520 10:51:27.388344  162947 status.go:330] multinode-202742 host status = "Running" (err=<nil>)
	I0520 10:51:27.388378  162947 host.go:66] Checking if "multinode-202742" exists ...
	I0520 10:51:27.388673  162947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-202742
	I0520 10:51:27.404398  162947 host.go:66] Checking if "multinode-202742" exists ...
	I0520 10:51:27.404722  162947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:51:27.404842  162947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-202742
	I0520 10:51:27.425374  162947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32912 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/multinode-202742/id_rsa Username:docker}
	I0520 10:51:27.514303  162947 ssh_runner.go:195] Run: systemctl --version
	I0520 10:51:27.518604  162947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:51:27.529986  162947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0520 10:51:27.585485  162947 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-05-20 10:51:27.575942852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1061-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214966272 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0]] Warnings:<nil>}}
	I0520 10:51:27.586174  162947 kubeconfig.go:125] found "multinode-202742" server: "https://192.168.67.2:8443"
	I0520 10:51:27.586208  162947 api_server.go:166] Checking apiserver status ...
	I0520 10:51:27.586272  162947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0520 10:51:27.598175  162947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2133/cgroup
	I0520 10:51:27.607883  162947 api_server.go:182] apiserver freezer: "9:freezer:/docker/3d0da164ef737f5f0f3c674d7f4ba325acdb2304f99b776950778a445780f648/kubepods/burstable/pod68bb631548829fe6d981b8450dd764d4/2383a07c59f74e0885e0484ddfa6efbe0f67fccc7cc616bb537922a4b5899740"
	I0520 10:51:27.607952  162947 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3d0da164ef737f5f0f3c674d7f4ba325acdb2304f99b776950778a445780f648/kubepods/burstable/pod68bb631548829fe6d981b8450dd764d4/2383a07c59f74e0885e0484ddfa6efbe0f67fccc7cc616bb537922a4b5899740/freezer.state
	I0520 10:51:27.616602  162947 api_server.go:204] freezer state: "THAWED"
	I0520 10:51:27.616630  162947 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0520 10:51:27.624484  162947 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0520 10:51:27.624516  162947 status.go:422] multinode-202742 apiserver status = Running (err=<nil>)
	I0520 10:51:27.624528  162947 status.go:257] multinode-202742 status: &{Name:multinode-202742 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:51:27.624552  162947 status.go:255] checking status of multinode-202742-m02 ...
	I0520 10:51:27.624874  162947 cli_runner.go:164] Run: docker container inspect multinode-202742-m02 --format={{.State.Status}}
	I0520 10:51:27.640448  162947 status.go:330] multinode-202742-m02 host status = "Running" (err=<nil>)
	I0520 10:51:27.640473  162947 host.go:66] Checking if "multinode-202742-m02" exists ...
	I0520 10:51:27.640776  162947 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-202742-m02
	I0520 10:51:27.659376  162947 host.go:66] Checking if "multinode-202742-m02" exists ...
	I0520 10:51:27.659783  162947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0520 10:51:27.659846  162947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-202742-m02
	I0520 10:51:27.676543  162947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/18925-2151/.minikube/machines/multinode-202742-m02/id_rsa Username:docker}
	I0520 10:51:27.762728  162947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0520 10:51:27.774045  162947 status.go:257] multinode-202742-m02 status: &{Name:multinode-202742-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:51:27.774081  162947 status.go:255] checking status of multinode-202742-m03 ...
	I0520 10:51:27.774387  162947 cli_runner.go:164] Run: docker container inspect multinode-202742-m03 --format={{.State.Status}}
	I0520 10:51:27.790347  162947 status.go:330] multinode-202742-m03 host status = "Stopped" (err=<nil>)
	I0520 10:51:27.790372  162947 status.go:343] host is not running, skipping remaining checks
	I0520 10:51:27.790380  162947 status.go:257] multinode-202742-m03 status: &{Name:multinode-202742-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
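Note: the scenario above can be replayed outside the test harness. A minimal sketch, assuming an existing multi-node cluster (the profile name multinode-demo is illustrative) and using only the commands exercised by the test; minikube status deliberately exits 7 while a node is stopped, which is why the non-zero exit above is expected:

    # stop one worker node of a running multi-node cluster
    minikube -p multinode-demo node stop m03
    # status lists the stopped node and exits 7 (degraded) rather than 0
    minikube -p multinode-demo status --alsologtostderr
    echo $?    # expected: 7 while m03 is stopped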

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-202742 node start m03 -v=7 --alsologtostderr: (10.205719019s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.96s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (116.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-202742
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-202742
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-202742: (22.666423072s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-202742 --wait=true -v=8 --alsologtostderr
E0520 10:52:50.216699    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-202742 --wait=true -v=8 --alsologtostderr: (1m33.872829037s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-202742
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.65s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-202742 node delete m03: (5.144265037s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.81s)
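Note: a minimal sketch of the delete-and-verify flow above (profile and node names illustrative); the final check simply confirms that only Ready nodes remain:

    # remove the third node, then confirm the cluster only reports the remaining nodes
    minikube -p multinode-demo node delete m03
    minikube -p multinode-demo status --alsologtostderr
    kubectl get nodes    # expect two nodes, both Ready, after the delete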

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 stop
E0520 10:53:53.301758    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-202742 stop: (21.498531571s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-202742 status: exit status 7 (81.964808ms)

                                                
                                                
-- stdout --
	multinode-202742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-202742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr: exit status 7 (74.10553ms)

                                                
                                                
-- stdout --
	multinode-202742
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-202742-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0520 10:54:02.831628  175606 out.go:291] Setting OutFile to fd 1 ...
	I0520 10:54:02.831808  175606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:54:02.831820  175606 out.go:304] Setting ErrFile to fd 2...
	I0520 10:54:02.831825  175606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0520 10:54:02.832064  175606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18925-2151/.minikube/bin
	I0520 10:54:02.832237  175606 out.go:298] Setting JSON to false
	I0520 10:54:02.832275  175606 mustload.go:65] Loading cluster: multinode-202742
	I0520 10:54:02.832346  175606 notify.go:220] Checking for updates...
	I0520 10:54:02.833338  175606 config.go:182] Loaded profile config "multinode-202742": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0520 10:54:02.833365  175606 status.go:255] checking status of multinode-202742 ...
	I0520 10:54:02.833842  175606 cli_runner.go:164] Run: docker container inspect multinode-202742 --format={{.State.Status}}
	I0520 10:54:02.850851  175606 status.go:330] multinode-202742 host status = "Stopped" (err=<nil>)
	I0520 10:54:02.850885  175606 status.go:343] host is not running, skipping remaining checks
	I0520 10:54:02.850894  175606 status.go:257] multinode-202742 status: &{Name:multinode-202742 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0520 10:54:02.850939  175606 status.go:255] checking status of multinode-202742-m02 ...
	I0520 10:54:02.851240  175606 cli_runner.go:164] Run: docker container inspect multinode-202742-m02 --format={{.State.Status}}
	I0520 10:54:02.867074  175606 status.go:330] multinode-202742-m02 host status = "Stopped" (err=<nil>)
	I0520 10:54:02.867106  175606 status.go:343] host is not running, skipping remaining checks
	I0520 10:54:02.867113  175606 status.go:257] multinode-202742-m02 status: &{Name:multinode-202742-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.65s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (56.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-202742 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-202742 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.304951316s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-202742 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.03s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (38.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-202742
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-202742-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-202742-m02 --driver=docker  --container-runtime=docker: exit status 14 (118.653912ms)

                                                
                                                
-- stdout --
	* [multinode-202742-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-202742-m02' is duplicated with machine name 'multinode-202742-m02' in profile 'multinode-202742'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-202742-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-202742-m03 --driver=docker  --container-runtime=docker: (35.799974855s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-202742
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-202742: exit status 80 (300.017539ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-202742 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-202742-m03 already exists in multinode-202742-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-202742-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-202742-m03: (2.103104853s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (38.37s)
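Note: the exit codes above come from minikube's usage checks. A sketch with illustrative names, assuming a multi-node profile already exists; a new profile whose name matches an existing machine name is rejected with exit code 14:

    # fails: the name collides with the m02 machine of the existing multi-node profile
    minikube start -p multinode-demo-m02 --driver=docker --container-runtime=docker
    echo $?    # expected: 14 (MK_USAGE: Profile name should be unique)
    # a non-colliding name starts normally as a separate profile
    minikube start -p multinode-demo-m03 --driver=docker --container-runtime=docker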

                                                
                                    
TestPreload (104.36s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-682394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-682394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m7.028372842s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-682394 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-682394 image pull gcr.io/k8s-minikube/busybox: (1.244002821s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-682394
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-682394: (10.907551197s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-682394 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-682394 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (22.647046233s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-682394 image list
helpers_test.go:175: Cleaning up "test-preload-682394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-682394
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-682394: (2.242070813s)
--- PASS: TestPreload (104.36s)
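Note: TestPreload checks that an image added to a cluster started with --preload=false survives a stop/start cycle. A minimal sketch (profile name illustrative, flags as exercised above):

    minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=docker
    minikube -p preload-demo image list    # the busybox image should still be listed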

                                                
                                    
TestScheduledStopUnix (106.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-053352 --memory=2048 --driver=docker  --container-runtime=docker
E0520 10:57:50.216750    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-053352 --memory=2048 --driver=docker  --container-runtime=docker: (33.08602016s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053352 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-053352 -n scheduled-stop-053352
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053352 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053352 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-053352 -n scheduled-stop-053352
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-053352
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-053352 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0520 10:58:53.301584    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-053352
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-053352: exit status 7 (71.194814ms)

                                                
                                                
-- stdout --
	scheduled-stop-053352
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-053352 -n scheduled-stop-053352
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-053352 -n scheduled-stop-053352: exit status 7 (66.070292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-053352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-053352
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-053352: (1.670408172s)
--- PASS: TestScheduledStopUnix (106.23s)
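Note: a sketch of the scheduled-stop flow exercised above (profile name illustrative): arm a schedule, cancel it, then arm a short one and let it fire:

    minikube stop -p sched-demo --schedule 5m          # arm a stop five minutes out
    minikube stop -p sched-demo --cancel-scheduled     # cancel it again
    minikube stop -p sched-demo --schedule 15s         # arm a short schedule and let it fire
    sleep 20
    minikube status -p sched-demo --format={{.Host}}   # expected: Stopped, exit code 7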

                                                
                                    
TestSkaffold (117.11s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1740089388 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-916977 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-916977 --memory=2600 --driver=docker  --container-runtime=docker: (31.440165063s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1740089388 run --minikube-profile skaffold-916977 --kube-context skaffold-916977 --status-check=true --port-forward=false --interactive=false
E0520 11:00:53.263491    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1740089388 run --minikube-profile skaffold-916977 --kube-context skaffold-916977 --status-check=true --port-forward=false --interactive=false: (1m9.97882505s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7465b665bb-9rx9n" [6210f2cb-5330-4b2a-8123-61ef95e409db] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004353399s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7bdbd95c89-z8kxt" [67d1351f-54f9-46be-aea9-2cb5819e7d21] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003144167s
helpers_test.go:175: Cleaning up "skaffold-916977" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-916977
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-916977: (2.951016864s)
--- PASS: TestSkaffold (117.11s)
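Note: the test copies the minikube binary onto PATH as "minikube" before invoking skaffold, so skaffold can drive the profile's docker daemon. A sketch of the same flow (profile name illustrative; skaffold v2.12.0 per the log above):

    minikube start -p skaffold-demo --memory=2600 --driver=docker --container-runtime=docker
    skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app    # the test waits for this selector to report Running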

                                                
                                    
TestInsufficientStorage (10.61s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-791665 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-791665 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.381127619s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"255148ac-3b4d-4c70-9f54-c2e58804b601","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-791665] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dedebf57-715d-479c-83d6-4727cedb710a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18925"}}
	{"specversion":"1.0","id":"8c8f17f4-2a43-4a5a-b34c-0c81e35d5f71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"04117ec1-1029-475c-aea8-23852fb66ade","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig"}}
	{"specversion":"1.0","id":"d3c5f219-4eea-474b-9006-0447200b86a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube"}}
	{"specversion":"1.0","id":"87b582d9-bb5e-4bc2-b707-6abbcd7fb546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"840c9b6c-bd9d-41ca-aafa-8f2346e50e32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b879bc4c-a4ac-462e-b7da-6cd8e6155108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"93f117af-80c4-4da9-9f8a-e4c3e022d0e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bb53db63-b894-49b5-87ae-1ad4b038f67c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"83271941-a947-4cf6-b01a-de8e6f2eca39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"fc1132c1-0246-43d4-a358-f1d555e263e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-791665\" primary control-plane node in \"insufficient-storage-791665\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"36f92613-e161-4ae2-abdb-473764100639","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1715707529-18887 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1659bb1-8cbf-4bb8-9d72-1b5e227d3141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"bce0fd98-6ed3-4830-927f-6474b571fda6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-791665 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-791665 --output=json --layout=cluster: exit status 7 (274.75247ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-791665","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-791665","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:01:17.506734  208172 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-791665" does not appear in /home/jenkins/minikube-integration/18925-2151/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-791665 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-791665 --output=json --layout=cluster: exit status 7 (264.757912ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-791665","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-791665","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0520 11:01:17.772767  208225 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-791665" does not appear in /home/jenkins/minikube-integration/18925-2151/kubeconfig
	E0520 11:01:17.783051  208225 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/insufficient-storage-791665/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-791665" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-791665
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-791665: (1.68592829s)
--- PASS: TestInsufficientStorage (10.61s)
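Note: the storage figures come from the test-only environment variables visible in the JSON events above; they force the low-disk path without actually filling /var. A sketch (profile name illustrative):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=docker
    echo $?    # expected: 26 (RSRC_DOCKER_STORAGE); the error message suggests --force to skip the check
    minikube status -p storage-demo --output=json --layout=cluster    # StatusCode 507, InsufficientStorage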

                                                
                                    
TestRunningBinaryUpgrade (80.59s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1916650757 start -p running-upgrade-933292 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0520 11:06:56.358033    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 11:07:16.820497    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1916650757 start -p running-upgrade-933292 --memory=2200 --vm-driver=docker  --container-runtime=docker: (35.576622826s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-933292 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0520 11:07:50.217397    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-933292 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.615089147s)
helpers_test.go:175: Cleaning up "running-upgrade-933292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-933292
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-933292: (2.202438637s)
--- PASS: TestRunningBinaryUpgrade (80.59s)
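Note: a sketch of the in-place binary upgrade exercised above; the old-release binary path is illustrative (the test downloads a v1.26.0 build to /tmp):

    # create the cluster with an older minikube release, then restart it with the current binary
    /path/to/minikube-v1.26.0 start -p upgrade-demo --memory=2200 --vm-driver=docker --container-runtime=docker
    ./out/minikube-linux-arm64 start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker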

                                                
                                    
x
+
TestKubernetesUpgrade (371.18s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0520 11:03:53.301493    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (56.355578886s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-472570
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-472570: (1.292462416s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-472570 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-472570 status --format={{.Host}}: exit status 7 (67.681726ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m47.443134164s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-472570 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (114.713394ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-472570] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-472570
	    minikube start -p kubernetes-upgrade-472570 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4725702 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-472570 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-472570 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.296614646s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-472570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-472570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-472570: (2.513739865s)
--- PASS: TestKubernetesUpgrade (371.18s)
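Note: a sketch of the upgrade/downgrade sequence above (profile name illustrative). The version upgrade happens across a stop, and the later downgrade attempt is rejected with exit code 106:

    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.30.1 --driver=docker --container-runtime=docker
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
    echo $?    # expected: 106 (K8S_DOWNGRADE_UNSUPPORTED)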

                                                
                                    
TestMissingContainerUpgrade (148.74s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3964160630 start -p missing-upgrade-963551 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3964160630 start -p missing-upgrade-963551 --memory=2200 --driver=docker  --container-runtime=docker: (1m21.786594347s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-963551
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-963551: (10.429688872s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-963551
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-963551 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-963551 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.237247924s)
helpers_test.go:175: Cleaning up "missing-upgrade-963551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-963551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-963551: (2.177638133s)
--- PASS: TestMissingContainerUpgrade (148.74s)
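Note: a sketch of the recovery path above, in which the cluster's container is removed behind minikube's back and the newer binary recreates it (old binary path and profile name illustrative):

    /path/to/minikube-v1.26.0 start -p missing-demo --memory=2200 --driver=docker --container-runtime=docker
    docker stop missing-demo && docker rm missing-demo
    ./out/minikube-linux-arm64 start -p missing-demo --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=docker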

                                                
                                    
TestPause/serial/Start (94.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-318091 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0520 11:02:50.216846    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-318091 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m34.994279229s)
--- PASS: TestPause/serial/Start (94.99s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (34.63s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-318091 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-318091 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.613971393s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.63s)

                                                
                                    
TestPause/serial/Pause (0.9s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-318091 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-318091 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-318091 --output=json --layout=cluster: exit status 2 (486.227358ms)

                                                
                                                
-- stdout --
	{"Name":"pause-318091","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-318091","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-318091 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.09s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-318091 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-318091 --alsologtostderr -v=5: (1.0860712s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.4s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-318091 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-318091 --alsologtostderr -v=5: (2.40270704s)
--- PASS: TestPause/serial/DeletePaused (2.40s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-318091
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-318091: exit status 1 (15.5986ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-318091: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)
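Note: a sketch of the pause lifecycle covered by the serial steps above (profile name illustrative). While paused, minikube status exits 2 and reports StatusCode 418:

    minikube pause -p pause-demo --alsologtostderr -v=5
    minikube status -p pause-demo --output=json --layout=cluster    # exit 2, StatusCode 418 (Paused)
    minikube unpause -p pause-demo --alsologtostderr -v=5
    minikube delete -p pause-demo --alsologtostderr -v=5
    docker volume inspect pause-demo    # exit 1 once the profile's volume is gone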

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (79.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3777982370 start -p stopped-upgrade-747795 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0520 11:05:54.891481    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:54.896918    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:54.907153    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:54.928214    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:54.968429    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:55.054644    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:55.215525    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:55.535655    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:56.176455    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:05:57.457214    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:06:00.017433    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:06:05.138341    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3777982370 start -p stopped-upgrade-747795 --memory=2200 --vm-driver=docker  --container-runtime=docker: (39.146700031s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3777982370 -p stopped-upgrade-747795 stop
E0520 11:06:15.379382    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3777982370 -p stopped-upgrade-747795 stop: (10.787039267s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-747795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0520 11:06:35.859598    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-747795 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.187838902s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (79.12s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-747795
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-747795: (1.431694872s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (78.22724ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-518329] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18925
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18925-2151/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18925-2151/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
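
Exit status 14 above is the MK_USAGE error shown in stderr: the two flags are mutually exclusive and the start is rejected before any work is done. A hand-run sketch of the conflict and the suggested fix (commands taken from the output above):

    out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=docker
    echo $?   # 14 (MK_USAGE)
    # if kubernetes-version is set in the global config, clear it as the error message suggests
    minikube config unset kubernetes-version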

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-518329 --driver=docker  --container-runtime=docker
E0520 11:08:53.300870    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-518329 --driver=docker  --container-runtime=docker: (39.182901319s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-518329 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --driver=docker  --container-runtime=docker: (14.813229535s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-518329 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-518329 status -o json: exit status 2 (348.957875ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-518329","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-518329
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-518329: (1.888897542s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.05s)
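
The exit status 2 from status -o json above is expected while the kubelet is stopped; the JSON itself still prints, so the state can be pulled out directly. A minimal sketch (jq is an assumption here, the test decodes the JSON in Go):

    # Host stays Running while Kubelet and APIServer report Stopped once the profile runs with --no-kubernetes
    out/minikube-linux-arm64 -p NoKubernetes-518329 status -o json | jq -r '.Kubelet'
    # -> Stopped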

                                                
                                    
x
+
TestNoKubernetes/serial/Start (10.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-518329 --no-kubernetes --driver=docker  --container-runtime=docker: (10.340340131s)
--- PASS: TestNoKubernetes/serial/Start (10.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-518329 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-518329 "sudo systemctl is-active --quiet service kubelet": exit status 1 (309.663944ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
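
The ssh exit status 3 seen in stderr comes from systemctl: is-active exits non-zero (typically 3 for an inactive unit) when kubelet is not running, which is exactly what this step asserts. A hand-run sketch of the same check:

    out/minikube-linux-arm64 ssh -p NoKubernetes-518329 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # 1 here, wrapping systemctl's non-zero status from inside the node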

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.78s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-518329
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-518329: (1.487976226s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-518329 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-518329 --driver=docker  --container-runtime=docker: (8.043493149s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-518329 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-518329 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.75705ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (144.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-879853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0520 11:12:50.217088    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-879853 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m24.870243389s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.87s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-879853 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c186d298-842b-4566-8e74-3ea681e61c2c] Pending
helpers_test.go:344: "busybox" [c186d298-842b-4566-8e74-3ea681e61c2c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c186d298-842b-4566-8e74-3ea681e61c2c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00444252s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-879853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.21s)
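
The deploy step can be replayed by hand; a sketch using kubectl wait in place of the test's own pod polling (the wait invocation is an assumption, the manifest and exec call are taken from the log):

    kubectl --context old-k8s-version-879853 create -f testdata/busybox.yaml
    # the test polls pods labelled integration-test=busybox until Running; kubectl wait does much the same
    kubectl --context old-k8s-version-879853 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-879853 exec busybox -- /bin/sh -c "ulimit -n"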

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.40s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-753976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-753976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (53.400869313s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (53.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-879853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-879853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.496061164s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-879853 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.71s)
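
The --images/--registries flags above point metrics-server at the echoserver image under fake.domain; a rough manual check (not part of the test) that the override landed on the deployment:

    # prints the container image the metrics-server deployment actually uses; with the
    # overrides above it should reference fake.domain/.../echoserver:1.4
    kubectl --context old-k8s-version-879853 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'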

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (11.36s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-879853 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-879853 --alsologtostderr -v=3: (11.356578414s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.36s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879853 -n old-k8s-version-879853
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879853 -n old-k8s-version-879853: exit status 7 (81.871549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-879853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
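
Exit status 7 from minikube status encodes the stopped host/cluster state rather than a command failure, which is why the test notes it "may be ok" before enabling the addon. A sketch of the same gate run by hand (the comment about when the addon takes effect is an assumption):

    # {{.Host}} prints Stopped and the command exits 7 while the profile is down
    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-879853 -n old-k8s-version-879853 || true
    # addons can still be enabled against a stopped profile; they are applied when it starts again
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-879853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4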

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-753976 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fbaebab5-42ac-4de9-8441-780ae173bb16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fbaebab5-42ac-4de9-8441-780ae173bb16] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003586091s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-753976 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-753976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-753976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.164180124s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-753976 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-753976 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-753976 --alsologtostderr -v=3: (11.238027606s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976: exit status 7 (68.743944ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-753976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-753976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0520 11:15:54.891826    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:17:33.263741    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 11:17:50.216563    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 11:18:53.301736    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-753976 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m48.93908021s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (289.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vbg6j" [77841031-3217-4aa0-9f4d-a71f8e0ce99e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003868528s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-vbg6j" [77841031-3217-4aa0-9f4d-a71f8e0ce99e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005442048s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-753976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-753976 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
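
The image check above lists the images loaded in the node and flags anything outside the stock Kubernetes set, hence the busybox image pulled by the earlier deploy test. A rough manual equivalent using the plain list format (the grep filter is an assumption; the JSON field names are not shown in this log):

    # anything not under registry.k8s.io/ shows up here, e.g. gcr.io/k8s-minikube/busybox:1.28.4-glibc
    out/minikube-linux-arm64 -p default-k8s-diff-port-753976 image list | grep -v '^registry.k8s.io/'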

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-753976 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976: exit status 2 (305.883815ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976: exit status 2 (311.539815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-753976 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)
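
The pause/unpause cycle above can be replayed with the same commands; while paused, the APIServer field reads Paused and the Kubelet field reads Stopped, both with exit status 2, and the final status checks are expected to report Running again (the post-unpause values are not printed in the log):

    out/minikube-linux-arm64 pause -p default-k8s-diff-port-753976
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976   # Paused, exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976     # Stopped, exit 2
    out/minikube-linux-arm64 unpause -p default-k8s-diff-port-753976
    out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-753976 -n default-k8s-diff-port-753976   # expected: Running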

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (88.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-601362 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-601362 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (1m28.981717895s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (88.98s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mkd7n" [5c7129aa-dfd7-45c1-a642-0ae6b45b0328] Running
E0520 11:20:54.891751    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005767928s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-mkd7n" [5c7129aa-dfd7-45c1-a642-0ae6b45b0328] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004225532s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-879853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-879853 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-879853 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879853 -n old-k8s-version-879853
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879853 -n old-k8s-version-879853: exit status 2 (481.524107ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-879853 -n old-k8s-version-879853
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-879853 -n old-k8s-version-879853: exit status 2 (445.495626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-879853 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-879853 -n old-k8s-version-879853
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-879853 -n old-k8s-version-879853
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.73s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (58.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-384622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-384622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (58.815748601s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-601362 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af774a6c-1d9a-4694-b59c-952db601098c] Pending
helpers_test.go:344: "busybox" [af774a6c-1d9a-4694-b59c-952db601098c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af774a6c-1d9a-4694-b59c-952db601098c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00446698s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-601362 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-384622 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85861599-add3-43de-aed1-cc77fe89ac1d] Pending
helpers_test.go:344: "busybox" [85861599-add3-43de-aed1-cc77fe89ac1d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85861599-add3-43de-aed1-cc77fe89ac1d] Running
E0520 11:22:17.942009    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004367345s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-384622 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-601362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-601362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.138395723s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-601362 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-384622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-384622 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.342670069s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-384622 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-601362 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-601362 --alsologtostderr -v=3: (11.014141072s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (11.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-384622 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-384622 --alsologtostderr -v=3: (11.348861574s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-601362 -n embed-certs-601362
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-601362 -n embed-certs-601362: exit status 7 (63.67707ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-601362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (269.74s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-601362 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-601362 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m29.377676521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-601362 -n embed-certs-601362
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (269.74s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-384622 -n no-preload-384622
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-384622 -n no-preload-384622: exit status 7 (88.176447ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-384622 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (272.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-384622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0520 11:22:50.216364    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 11:23:36.359151    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 11:23:53.301556    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
E0520 11:24:15.115116    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.120489    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.130844    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.151111    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.191382    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.271698    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.432067    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:15.752568    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:16.393008    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:17.673998    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:20.234772    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:25.355388    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:35.596177    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:24:56.077021    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:25:11.963902    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:11.969244    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:11.979472    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:11.999755    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:12.040023    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:12.120350    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:12.280787    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:12.601556    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:13.242122    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:14.523074    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:17.083280    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:22.204188    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:32.444935    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:37.037378    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:25:52.926016    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:25:54.891585    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/skaffold-916977/client.crt: no such file or directory
E0520 11:26:33.886210    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
E0520 11:26:58.957591    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-384622 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m32.229954799s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-384622 -n no-preload-384622
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (272.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n55q8" [3dd8ad9e-8dda-4248-8ca8-d98eb63b5044] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003590489s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-lmvr5" [b9939416-94b3-4516-90d7-35c17efb9641] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003827361s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-n55q8" [3dd8ad9e-8dda-4248-8ca8-d98eb63b5044] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004251957s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-601362 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-lmvr5" [b9939416-94b3-4516-90d7-35c17efb9641] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00324547s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-384622 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-601362 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-601362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601362 -n embed-certs-601362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601362 -n embed-certs-601362: exit status 2 (304.856502ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-601362 -n embed-certs-601362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-601362 -n embed-certs-601362: exit status 2 (343.111026ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-601362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601362 -n embed-certs-601362
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-601362 -n embed-certs-601362
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.76s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-384622 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-384622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-384622 -n no-preload-384622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-384622 -n no-preload-384622: exit status 2 (374.71133ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-384622 -n no-preload-384622
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-384622 -n no-preload-384622: exit status 2 (385.899476ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-384622 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-384622 -n no-preload-384622
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-384622 -n no-preload-384622
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (56.93s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-715872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-715872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (56.934591136s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (56.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (55.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0520 11:27:50.216680    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 11:27:55.806412    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (55.82602428s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-715872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-715872 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.287780148s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (6.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-715872 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-715872 --alsologtostderr -v=3: (6.063092024s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-q8kbh" [c1f9c35e-7f73-45f9-aed5-6f518f4294d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-q8kbh" [c1f9c35e-7f73-45f9-aed5-6f518f4294d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003994975s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-715872 -n newest-cni-715872
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-715872 -n newest-cni-715872: exit status 7 (82.447716ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-715872 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (19.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-715872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-715872 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (18.714398681s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-715872 -n newest-cni-715872
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.35s)
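
Note: each NetworkPlugins group runs the same three probes against the netcat deployment created in its NetCatPod step: an in-cluster DNS lookup of kubernetes.default, a loopback connection to port 8080, and a hairpin connection back through the pod's own service name. The sketch below replays those probes from Go via kubectl, assuming kubectl on PATH and the deployment from testdata/netcat-deployment.yaml already applied in the default namespace; it is an illustration of the logged commands, not the test code itself.

// netchecks.go - illustrative sketch of the DNS / Localhost / HairPin probes
// run against the netcat deployment; assumes `kubectl` is on PATH and the
// deployment is already running in the default namespace.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command inside the netcat deployment and reports pass/fail.
func run(ctx, label string, args ...string) {
	cmd := exec.Command("kubectl", append([]string{"--context", ctx,
		"exec", "deployment/netcat", "--"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("%s: FAILED (%v)\n%s", label, err, out)
	} else {
		fmt.Printf("%s: ok\n", label)
	}
}

func main() {
	ctx := "auto-563577" // kubectl context for the profile under test

	run(ctx, "DNS", "nslookup", "kubernetes.default")
	run(ctx, "Localhost", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	run(ctx, "HairPin", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}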

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-715872 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-715872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-715872 -n newest-cni-715872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-715872 -n newest-cni-715872: exit status 2 (401.921633ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-715872 -n newest-cni-715872
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-715872 -n newest-cni-715872: exit status 2 (416.452704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-715872 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-715872 -n newest-cni-715872
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-715872 -n newest-cni-715872
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.79s)
E0520 11:35:25.056607    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/calico-563577/client.crt: no such file or directory
E0520 11:35:27.617523    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/calico-563577/client.crt: no such file or directory

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (71.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0520 11:28:53.300732    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m11.879003536s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (83.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0520 11:29:15.114778    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
E0520 11:29:42.797791    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/old-k8s-version-879853/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m23.801663484s)
--- PASS: TestNetworkPlugins/group/calico/Start (83.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-t6gvb" [1b4b3d87-0135-467e-b5b8-5fbb7603bb32] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004444885s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
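
Note: ControllerPod waits up to 10 minutes for the CNI's controller pod (here label app=kindnet in kube-system) to become healthy before the rest of the group proceeds. An equivalent wait can be expressed directly with `kubectl wait`; the sketch below does that from Go, assuming kubectl on PATH and the profile's kubectl context, as an illustration rather than the harness's own polling helper.

// waitcni.go - illustrative sketch of a ControllerPod-style readiness wait,
// expressed as a single `kubectl wait` call; assumes `kubectl` is on PATH.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "kindnet-563577",
		"wait", "--namespace", "kube-system",
		"--for=condition=Ready", "pod",
		"--selector", "app=kindnet",
		"--timeout=10m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kindnet pod never became Ready: %v", err)
	}
}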

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2s4zv" [4636ff65-e80c-4d56-b9f1-799aa1fb1936] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-2s4zv" [4636ff65-e80c-4d56-b9f1-799aa1fb1936] Running
E0520 11:30:11.964587    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004093227s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xg486" [4deec4dd-eba2-4e71-b895-6dfe702b68b8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005682449s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jnc5d" [bde5d87e-7949-491a-9033-cd52dc5d588f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jnc5d" [bde5d87e-7949-491a-9033-cd52dc5d588f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004475407s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (69.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0520 11:30:39.647500    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/default-k8s-diff-port-753976/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m9.492340312s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Start (52.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (52.47792281s)
--- PASS: TestNetworkPlugins/group/false/Start (52.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7pgs6" [b118c6c4-261a-4d04-ba1f-a4725a7439d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7pgs6" [b118c6c4-261a-4d04-ba1f-a4725a7439d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00452956s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/NetCatPod (12.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hcjqh" [28dabafd-2b8c-4657-991c-f0507dc6fff3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hcjqh" [28dabafd-2b8c-4657-991c-f0507dc6fff3] Running
E0520 11:32:12.436207    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:32:12.441706    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:32:12.452164    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:32:12.474058    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:32:12.514430    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:32:12.595148    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.00442917s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-563577 exec deployment/netcat -- nslookup kubernetes.default
E0520 11:32:12.756130    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0520 11:32:13.076712    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/false/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (56.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0520 11:32:22.680090    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:32:32.921201    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (56.678697152s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (66.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0520 11:32:50.217213    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/addons-988376/client.crt: no such file or directory
E0520 11:32:53.401920    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m6.516654653s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.52s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-4brbw" [66cc7f65-f4f4-48e2-8dfd-670a12ab4ae1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0520 11:33:21.871467    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:21.876810    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:21.887079    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:21.907324    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:21.947566    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:22.027918    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:22.188574    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:22.509556    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-4brbw" [66cc7f65-f4f4-48e2-8dfd-670a12ab4ae1] Running
E0520 11:33:23.150247    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:24.430964    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
E0520 11:33:26.991174    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00425914s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-57nz9" [506728ec-d32e-4eb2-924a-9264452563c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00416855s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-bqsxx" [792af4f9-6d32-4362-8ca1-2e0ce4cc07c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-bqsxx" [792af4f9-6d32-4362-8ca1-2e0ce4cc07c9] Running
E0520 11:34:02.833533    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.006886047s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (59.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0520 11:33:53.300794    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/functional-660553/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (59.047529917s)
--- PASS: TestNetworkPlugins/group/bridge/Start (59.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Start (53.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0520 11:34:43.794568    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/auto-563577/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-563577 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (53.779672834s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (53.78s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-563577 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6996c" [018a7f67-3bfd-4c8d-8f9c-4326f10c4960] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6996c" [018a7f67-3bfd-4c8d-8f9c-4326f10c4960] Running
E0520 11:34:56.282868    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/no-preload-384622/client.crt: no such file or directory
E0520 11:35:00.115636    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.120905    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.131183    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.151441    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.191696    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.272414    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.432785    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:00.753728    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
E0520 11:35:01.394236    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/kindnet-563577/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004121007s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-563577 "pgrep -a kubelet"
E0520 11:35:22.815209    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/calico-563577/client.crt: no such file or directory
E0520 11:35:23.135438    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/calico-563577/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-563577 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-z4n7x" [d2095025-f299-47f1-a11c-b4a9fed668ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0520 11:35:23.776310    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/calico-563577/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-z4n7x" [d2095025-f299-47f1-a11c-b4a9fed668ae] Running
E0520 11:35:32.738096    7512 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18925-2151/.minikube/profiles/calico-563577/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.003535289s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-563577 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-563577 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)

                                                
                                    

Test skip (24/342)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-508495 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-508495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-508495
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
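
Note: this skip is environmental rather than a defect: the run uses the docker runtime, while TestDockerEnvContainerd requires a containerd-backed cluster on the Docker driver. As a rough sketch, a profile satisfying that precondition could be started with the same flags used elsewhere in this report, swapping only the runtime value (the profile name here is illustrative):

	out/minikube-linux-arm64 start -p containerd-env-test --driver=docker --container-runtime=containerd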

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-458669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-458669
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-563577 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-563577" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-563577

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-563577" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-563577"

                                                
                                                
----------------------- debugLogs end: cilium-563577 [took: 5.034222223s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-563577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-563577
--- SKIP: TestNetworkPlugins/group/cilium (5.26s)
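
Note: every error in the debugLogs block above ("context was not found for specified context", "Profile \"cilium-563577\" not found") follows from the test being skipped before any cluster was created, not from a fault in the log collector; the empty kubectl config dump (clusters: null, contexts: null) confirms that no cilium-563577 context exists. A quick way to verify that reading on the test host, using standard minikube and kubectl commands that are not part of the test itself:

	# list profiles known to this minikube binary; cilium-563577 should be absent
	out/minikube-linux-arm64 profile list
	# list contexts known to kubectl; cilium-563577 should likewise be absent
	kubectl config get-contexts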

                                                
                                    