Test Report: Docker_Linux_docker_arm64 17206

f478b3e95ad7f4002b1f24747b20ea33f6e08bc3:2023-11-28:32057

Failed tests (7/329)

TestAddons/parallel/Ingress (37.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-889952 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-889952 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-889952 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [81454512-bca1-4f47-953f-42060bddcbb8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [81454512-bca1-4f47-953f-42060bddcbb8] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.02239571s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-889952 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.059379282s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-889952 addons disable ingress-dns --alsologtostderr -v=1: (1.016860168s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-889952 addons disable ingress --alsologtostderr -v=1: (7.743332301s)
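Before the post-mortem dumps below, note that the failing assertion reduces to a two-step check: ask minikube for the node IP, then resolve a test hostname against the ingress-dns server that should be listening there. A sketch of the same check run by hand, using the profile name, binary path, and hostname from the log above (the IP is what the `ip` command returned in this run):

    # Replay the ingress-dns lookup that failed (commands copied from the log above).
    MINIKUBE_IP=$(out/minikube-linux-arm64 -p addons-889952 ip)   # 192.168.49.2 in this run
    nslookup hello-john.test "$MINIKUBE_IP"                       # failed here: ';; connection timed out; no servers could be reached'
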
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-889952
helpers_test.go:235: (dbg) docker inspect addons-889952:

-- stdout --
	[
	    {
	        "Id": "41fbec0dc2ad2f749de22abbfa8d4494e801fc2f2af4790c91f682f74e58d724",
	        "Created": "2023-11-27T23:26:01.325803164Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:26:01.67005393Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/41fbec0dc2ad2f749de22abbfa8d4494e801fc2f2af4790c91f682f74e58d724/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41fbec0dc2ad2f749de22abbfa8d4494e801fc2f2af4790c91f682f74e58d724/hostname",
	        "HostsPath": "/var/lib/docker/containers/41fbec0dc2ad2f749de22abbfa8d4494e801fc2f2af4790c91f682f74e58d724/hosts",
	        "LogPath": "/var/lib/docker/containers/41fbec0dc2ad2f749de22abbfa8d4494e801fc2f2af4790c91f682f74e58d724/41fbec0dc2ad2f749de22abbfa8d4494e801fc2f2af4790c91f682f74e58d724-json.log",
	        "Name": "/addons-889952",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-889952:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-889952",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e61266bc652125c7980bd63799aa1cfa7c164ad2d6886440eae819c5273c8e81-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e61266bc652125c7980bd63799aa1cfa7c164ad2d6886440eae819c5273c8e81/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e61266bc652125c7980bd63799aa1cfa7c164ad2d6886440eae819c5273c8e81/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e61266bc652125c7980bd63799aa1cfa7c164ad2d6886440eae819c5273c8e81/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-889952",
	                "Source": "/var/lib/docker/volumes/addons-889952/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-889952",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-889952",
	                "name.minikube.sigs.k8s.io": "addons-889952",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b295ed5f421024a28b851c35e691de79e35d233edfdcdd18dadbf8a035bb0b23",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/b295ed5f4210",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-889952": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "41fbec0dc2ad",
	                        "addons-889952"
	                    ],
	                    "NetworkID": "0c1ee0442af750c886602c1d5755675ccb8f7aa62d644bf2c3947889bd315db5",
	                    "EndpointID": "97e6a0f6468c9a632e329ea2435f5c3c62adf28ca6b3a99c68f992c379b9a9e1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
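The inspect output above is the full dump; when triaging a failure like this, the same data can be narrowed to the relevant fields with docker inspect's standard --format templates (the log itself uses this mechanism, e.g. the cli_runner lines further down). A minimal sketch against the same container:

    # Node IP that the failing nslookup targeted (see .NetworkSettings.Networks above):
    docker container inspect addons-889952 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
    # Host-side bindings for the exposed ports (22, 2376, 5000, 8443, 32443):
    docker container inspect addons-889952 --format '{{json .NetworkSettings.Ports}}'
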
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-889952 -n addons-889952
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-889952 logs -n 25: (1.172280825s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-602899   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | -p download-only-602899                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-602899   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | -p download-only-602899                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                                                           |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| delete  | -p download-only-602899                                                                     | download-only-602899   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| delete  | -p download-only-602899                                                                     | download-only-602899   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| start   | --download-only -p                                                                          | download-docker-856124 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | download-docker-856124                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-856124                                                                   | download-docker-856124 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-341512   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | binary-mirror-341512                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46137                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-341512                                                                     | binary-mirror-341512   | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:25 UTC |
	| addons  | enable dashboard -p                                                                         | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | addons-889952                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |                     |
	|         | addons-889952                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-889952 --wait=true                                                                | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC | 27 Nov 23 23:28 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-889952 ip                                                                            | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	| addons  | addons-889952 addons disable                                                                | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-889952 addons                                                                        | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | addons-889952                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-889952 ssh curl -s                                                                   | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-889952 addons                                                                        | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-889952 ip                                                                            | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	| addons  | addons-889952 addons                                                                        | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | -p addons-889952                                                                            |                        |         |         |                     |                     |
	| addons  | addons-889952 addons disable                                                                | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:28 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-889952 addons disable                                                                | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:28 UTC | 27 Nov 23 23:29 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-889952 ssh cat                                                                       | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:29 UTC | 27 Nov 23 23:29 UTC |
	|         | /opt/local-path-provisioner/pvc-330b5ec0-97af-4e44-ab94-121d2102abbe_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-889952 addons disable                                                                | addons-889952          | jenkins | v1.32.0 | 27 Nov 23 23:29 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:25:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:25:37.973468    8042 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:25:37.973665    8042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:37.973690    8042 out.go:309] Setting ErrFile to fd 2...
	I1127 23:25:37.973710    8042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:37.973991    8042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1127 23:25:37.974470    8042 out.go:303] Setting JSON to false
	I1127 23:25:37.975231    8042 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":487,"bootTime":1701127051,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:25:37.975316    8042 start.go:138] virtualization:  
	I1127 23:25:37.979423    8042 out.go:177] * [addons-889952] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:25:37.981703    8042 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:25:37.984095    8042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:25:37.981829    8042 notify.go:220] Checking for updates...
	I1127 23:25:37.988965    8042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:25:37.990859    8042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:25:37.993100    8042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:25:37.995125    8042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:25:37.997777    8042 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:25:38.021651    8042 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:25:38.021761    8042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:38.103089    8042 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 23:25:38.093373511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:25:38.103186    8042 docker.go:295] overlay module found
	I1127 23:25:38.106322    8042 out.go:177] * Using the docker driver based on user configuration
	I1127 23:25:38.108116    8042 start.go:298] selected driver: docker
	I1127 23:25:38.108131    8042 start.go:902] validating driver "docker" against <nil>
	I1127 23:25:38.108143    8042 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:25:38.108723    8042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:38.183045    8042 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 23:25:38.17397544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:25:38.183199    8042 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:25:38.183424    8042 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:25:38.185148    8042 out.go:177] * Using Docker driver with root privileges
	I1127 23:25:38.187207    8042 cni.go:84] Creating CNI manager for ""
	I1127 23:25:38.187231    8042 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 23:25:38.187243    8042 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1127 23:25:38.187258    8042 start_flags.go:323] config:
	{Name:addons-889952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-889952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:38.189473    8042 out.go:177] * Starting control plane node addons-889952 in cluster addons-889952
	I1127 23:25:38.191457    8042 cache.go:121] Beginning downloading kic base image for docker with docker
	I1127 23:25:38.193279    8042 out.go:177] * Pulling base image ...
	I1127 23:25:38.194954    8042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 23:25:38.195003    8042 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1127 23:25:38.195016    8042 cache.go:56] Caching tarball of preloaded images
	I1127 23:25:38.195090    8042 preload.go:174] Found /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1127 23:25:38.195104    8042 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1127 23:25:38.195439    8042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/config.json ...
	I1127 23:25:38.195461    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/config.json: {Name:mkcc6f70af4b2c52ca915278357f13bb4775c215 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:38.195598    8042 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:25:38.211754    8042 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:25:38.211868    8042 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:25:38.211890    8042 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:25:38.211898    8042 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:25:38.211905    8042 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:25:38.211916    8042 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from local cache
	I1127 23:25:53.794501    8042 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 from cached tarball
	I1127 23:25:53.794542    8042 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:25:53.794590    8042 start.go:365] acquiring machines lock for addons-889952: {Name:mk5053d944891bd7bb52603f0fcf76dc63580590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:25:53.794700    8042 start.go:369] acquired machines lock for "addons-889952" in 88.886µs
	I1127 23:25:53.794727    8042 start.go:93] Provisioning new machine with config: &{Name:addons-889952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-889952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 23:25:53.794815    8042 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:25:53.796902    8042 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1127 23:25:53.797131    8042 start.go:159] libmachine.API.Create for "addons-889952" (driver="docker")
	I1127 23:25:53.797159    8042 client.go:168] LocalClient.Create starting
	I1127 23:25:53.797256    8042 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem
	I1127 23:25:54.842381    8042 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem
	I1127 23:25:55.204953    8042 cli_runner.go:164] Run: docker network inspect addons-889952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:25:55.223162    8042 cli_runner.go:211] docker network inspect addons-889952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:25:55.223237    8042 network_create.go:281] running [docker network inspect addons-889952] to gather additional debugging logs...
	I1127 23:25:55.223256    8042 cli_runner.go:164] Run: docker network inspect addons-889952
	W1127 23:25:55.239564    8042 cli_runner.go:211] docker network inspect addons-889952 returned with exit code 1
	I1127 23:25:55.239592    8042 network_create.go:284] error running [docker network inspect addons-889952]: docker network inspect addons-889952: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-889952 not found
	I1127 23:25:55.239604    8042 network_create.go:286] output of [docker network inspect addons-889952]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-889952 not found
	
	** /stderr **
	I1127 23:25:55.239684    8042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:25:55.256503    8042 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002d64da0}
	I1127 23:25:55.256536    8042 network_create.go:124] attempt to create docker network addons-889952 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 23:25:55.256595    8042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-889952 addons-889952
	I1127 23:25:55.323582    8042 network_create.go:108] docker network addons-889952 192.168.49.0/24 created
	I1127 23:25:55.323614    8042 kic.go:121] calculated static IP "192.168.49.2" for the "addons-889952" container
	I1127 23:25:55.323695    8042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:25:55.339723    8042 cli_runner.go:164] Run: docker volume create addons-889952 --label name.minikube.sigs.k8s.io=addons-889952 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:25:55.357133    8042 oci.go:103] Successfully created a docker volume addons-889952
	I1127 23:25:55.357214    8042 cli_runner.go:164] Run: docker run --rm --name addons-889952-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-889952 --entrypoint /usr/bin/test -v addons-889952:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:25:57.301075    8042 cli_runner.go:217] Completed: docker run --rm --name addons-889952-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-889952 --entrypoint /usr/bin/test -v addons-889952:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.943803698s)
	I1127 23:25:57.301103    8042 oci.go:107] Successfully prepared a docker volume addons-889952
	I1127 23:25:57.301133    8042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 23:25:57.301151    8042 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:25:57.301231    8042 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-889952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:26:01.242592    8042 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-889952:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (3.94131276s)
	I1127 23:26:01.242621    8042 kic.go:203] duration metric: took 3.941467 seconds to extract preloaded images to volume
	W1127 23:26:01.242742    8042 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:26:01.242873    8042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:26:01.310717    8042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-889952 --name addons-889952 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-889952 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-889952 --network addons-889952 --ip 192.168.49.2 --volume addons-889952:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:26:01.678079    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Running}}
	I1127 23:26:01.698846    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:01.725674    8042 cli_runner.go:164] Run: docker exec addons-889952 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:26:01.785296    8042 oci.go:144] the created container "addons-889952" has a running status.
	I1127 23:26:01.785333    8042 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa...
	I1127 23:26:02.448207    8042 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:26:02.480811    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:02.509712    8042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:26:02.509731    8042 kic_runner.go:114] Args: [docker exec --privileged addons-889952 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1127 23:26:02.606230    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:02.634233    8042 machine.go:88] provisioning docker machine ...
	I1127 23:26:02.634260    8042 ubuntu.go:169] provisioning hostname "addons-889952"
	I1127 23:26:02.634344    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:02.659869    8042 main.go:141] libmachine: Using SSH client type: native
	I1127 23:26:02.660280    8042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:26:02.660292    8042 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-889952 && echo "addons-889952" | sudo tee /etc/hostname
	I1127 23:26:02.823286    8042 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-889952
	
	I1127 23:26:02.823421    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:02.847907    8042 main.go:141] libmachine: Using SSH client type: native
	I1127 23:26:02.848288    8042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:26:02.848313    8042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-889952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-889952/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-889952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:26:02.982836    8042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 23:26:02.982864    8042 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-2172/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-2172/.minikube}
	I1127 23:26:02.982885    8042 ubuntu.go:177] setting up certificates
	I1127 23:26:02.982893    8042 provision.go:83] configureAuth start
	I1127 23:26:02.982950    8042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-889952
	I1127 23:26:03.000804    8042 provision.go:138] copyHostCerts
	I1127 23:26:03.000896    8042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem (1078 bytes)
	I1127 23:26:03.001020    8042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem (1123 bytes)
	I1127 23:26:03.001089    8042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem (1679 bytes)
	I1127 23:26:03.001136    8042 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem org=jenkins.addons-889952 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-889952]
	I1127 23:26:03.504782    8042 provision.go:172] copyRemoteCerts
	I1127 23:26:03.504853    8042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:26:03.504901    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:03.523374    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:03.616346    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:26:03.642749    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1127 23:26:03.670758    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 23:26:03.696440    8042 provision.go:86] duration metric: configureAuth took 713.534396ms
	I1127 23:26:03.696465    8042 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:26:03.696647    8042 config.go:182] Loaded profile config "addons-889952": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:26:03.696709    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:03.714229    8042 main.go:141] libmachine: Using SSH client type: native
	I1127 23:26:03.714643    8042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:26:03.714661    8042 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1127 23:26:03.839758    8042 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1127 23:26:03.839816    8042 ubuntu.go:71] root file system type: overlay
	I1127 23:26:03.839927    8042 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1127 23:26:03.839991    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:03.858735    8042 main.go:141] libmachine: Using SSH client type: native
	I1127 23:26:03.859148    8042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:26:03.859235    8042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1127 23:26:03.999253    8042 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
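The empty ExecStart= line in the unit above is the standard systemd idiom for clearing the command inherited from the base configuration before supplying a replacement, exactly as the unit's embedded comments describe. A minimal check that the rewrite took effect on the node, using stock systemd tooling (nothing minikube-specific):
	sudo systemctl cat docker                    # print the unit systemd actually loaded
	sudo systemd-analyze verify docker.service   # would report a duplicate ExecStart= as an error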
	I1127 23:26:03.999337    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:04.018087    8042 main.go:141] libmachine: Using SSH client type: native
	I1127 23:26:04.018599    8042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I1127 23:26:04.018624    8042 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1127 23:26:04.748953    8042 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-11-27 23:26:03.994014037 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
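The diff-or-replace one-liner above is an idempotent update: diff -u exits non-zero only when the staged file differs from the installed one, so the || branch (move into place, daemon-reload, enable, restart) runs only when there is an actual change. The same pattern, generalized, with the enable step omitted for brevity:
	CUR=/lib/systemd/system/docker.service   # paths as used above
	NEW=$CUR.new
	sudo diff -u "$CUR" "$NEW" || {
	  sudo mv "$NEW" "$CUR"
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}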
	I1127 23:26:04.748984    8042 machine.go:91] provisioned docker machine in 2.114733236s
	I1127 23:26:04.748997    8042 client.go:171] LocalClient.Create took 10.951831467s
	I1127 23:26:04.749031    8042 start.go:167] duration metric: libmachine.API.Create for "addons-889952" took 10.951899668s
	I1127 23:26:04.749044    8042 start.go:300] post-start starting for "addons-889952" (driver="docker")
	I1127 23:26:04.749054    8042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:26:04.749120    8042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:26:04.749165    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:04.767547    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:04.860315    8042 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:26:04.865113    8042 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:26:04.865152    8042 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:26:04.865163    8042 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:26:04.865170    8042 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:26:04.865182    8042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-2172/.minikube/addons for local assets ...
	I1127 23:26:04.865240    8042 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-2172/.minikube/files for local assets ...
	I1127 23:26:04.865264    8042 start.go:303] post-start completed in 116.213724ms
	I1127 23:26:04.865555    8042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-889952
	I1127 23:26:04.883211    8042 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/config.json ...
	I1127 23:26:04.883447    8042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:26:04.883492    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:04.900213    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:04.991793    8042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:26:04.996742    8042 start.go:128] duration metric: createHost completed in 11.201915197s
	I1127 23:26:04.996759    8042 start.go:83] releasing machines lock for "addons-889952", held for 11.202047555s
	I1127 23:26:04.996818    8042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-889952
	I1127 23:26:05.018457    8042 ssh_runner.go:195] Run: cat /version.json
	I1127 23:26:05.018508    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:05.018766    8042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:26:05.018819    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:05.037452    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:05.050475    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:05.268525    8042 ssh_runner.go:195] Run: systemctl --version
	I1127 23:26:05.273682    8042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:26:05.278578    8042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1127 23:26:05.304141    8042 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:26:05.304211    8042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 23:26:05.333975    8042 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1127 23:26:05.334007    8042 start.go:472] detecting cgroup driver to use...
	I1127 23:26:05.334035    8042 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:26:05.334163    8042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:26:05.351385    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1127 23:26:05.361666    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1127 23:26:05.371612    8042 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1127 23:26:05.371677    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1127 23:26:05.381834    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 23:26:05.391660    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1127 23:26:05.401953    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 23:26:05.412436    8042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:26:05.422143    8042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
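The sed edits above (SystemdCgroup = false, the io.containerd.runc.v2 runtime, conf_dir = /etc/cni/net.d) align containerd with the cgroupfs driver detected on the host. A quick spot-check of the rewritten file, grepping for the keys just edited:
	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml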
	I1127 23:26:05.432606    8042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:26:05.441856    8042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:26:05.450781    8042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:26:05.541533    8042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1127 23:26:05.654756    8042 start.go:472] detecting cgroup driver to use...
	I1127 23:26:05.654860    8042 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:26:05.654945    8042 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1127 23:26:05.672235    8042 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1127 23:26:05.672340    8042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 23:26:05.686466    8042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:26:05.705785    8042 ssh_runner.go:195] Run: which cri-dockerd
	I1127 23:26:05.711338    8042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1127 23:26:05.720943    8042 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1127 23:26:05.741332    8042 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1127 23:26:05.841619    8042 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1127 23:26:05.943081    8042 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1127 23:26:05.943249    8042 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1127 23:26:05.963643    8042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:26:06.060276    8042 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1127 23:26:06.318461    8042 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1127 23:26:06.411152    8042 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1127 23:26:06.512404    8042 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1127 23:26:06.606525    8042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:26:06.700210    8042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1127 23:26:06.715584    8042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:26:06.804154    8042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1127 23:26:06.881259    8042 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1127 23:26:06.881417    8042 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1127 23:26:06.886088    8042 start.go:540] Will wait 60s for crictl version
	I1127 23:26:06.886177    8042 ssh_runner.go:195] Run: which crictl
	I1127 23:26:06.890426    8042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 23:26:06.944397    8042 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1127 23:26:06.944501    8042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1127 23:26:06.969192    8042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1127 23:26:06.998453    8042 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1127 23:26:06.998545    8042 cli_runner.go:164] Run: docker network inspect addons-889952 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:26:07.015742    8042 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 23:26:07.019894    8042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
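The /etc/hosts rewrite above follows a deliberate pattern: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the result is copied (not moved) over /etc/hosts. cp matters here because Docker bind-mounts /etc/hosts into the container, so the file can only be written in place, never replaced by rename. The same idiom, with the IP and name from the command above:
	IP=192.168.49.1; NAME=host.minikube.internal   # values from the command above
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$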
	I1127 23:26:07.032212    8042 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 23:26:07.032280    8042 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 23:26:07.052407    8042 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1127 23:26:07.052430    8042 docker.go:601] Images already preloaded, skipping extraction
	I1127 23:26:07.052488    8042 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 23:26:07.072624    8042 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1127 23:26:07.072647    8042 cache_images.go:84] Images are preloaded, skipping loading
	I1127 23:26:07.072709    8042 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1127 23:26:07.132889    8042 cni.go:84] Creating CNI manager for ""
	I1127 23:26:07.132916    8042 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 23:26:07.132927    8042 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:26:07.132948    8042 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-889952 NodeName:addons-889952 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 23:26:07.133078    8042 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-889952"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
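The generated kubeadm config above combines four documents: InitConfiguration (node registration, advertise address), ClusterConfiguration (API server SANs, component extra args), KubeletConfiguration (cgroupfs driver, eviction disabled), and KubeProxyConfiguration. As a hedged aside, recent kubeadm releases (believed to include the v1.28 series used here) can sanity-check such a file offline once it is on the node:
	# Assumption: the `config validate` subcommand exists in this kubeadm build.
	sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml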
	I1127 23:26:07.133143    8042 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-889952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-889952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1127 23:26:07.133208    8042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 23:26:07.143293    8042 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:26:07.143359    8042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:26:07.152866    8042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1127 23:26:07.173793    8042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 23:26:07.192330    8042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1127 23:26:07.211148    8042 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:26:07.215276    8042 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:26:07.227693    8042 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952 for IP: 192.168.49.2
	I1127 23:26:07.227760    8042 certs.go:190] acquiring lock for shared ca certs: {Name:mkf476800f388ef5f0e09831530252d4aaf23bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:07.227887    8042 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key
	I1127 23:26:07.481230    8042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt ...
	I1127 23:26:07.481255    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt: {Name:mkde1e6207b7e160bd868fdf2b5e3cbefef01c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:07.481430    8042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key ...
	I1127 23:26:07.481444    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key: {Name:mk071408349b78b67c319f8cba60dd39c8f3ac9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:07.481523    8042 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key
	I1127 23:26:07.763971    8042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.crt ...
	I1127 23:26:07.764000    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.crt: {Name:mk05cb4cc978543d864517643a9a18f8c1974bb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:07.764181    8042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key ...
	I1127 23:26:07.764195    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key: {Name:mkd70803f9940f8d7a22a2089fad6e6d7c360b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:07.764321    8042 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.key
	I1127 23:26:07.764339    8042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt with IP's: []
	I1127 23:26:08.179800    8042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt ...
	I1127 23:26:08.179829    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: {Name:mk271ed2da7e42eecad00f6da9b59c99b1b45634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:08.180012    8042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.key ...
	I1127 23:26:08.180025    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.key: {Name:mk2f77f13fc12a62111701137ba044557edf18d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:08.180108    8042 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.key.dd3b5fb2
	I1127 23:26:08.180128    8042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:26:09.072691    8042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.crt.dd3b5fb2 ...
	I1127 23:26:09.072723    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.crt.dd3b5fb2: {Name:mkdd5e460ba61ec47ace8ff565ca9749e25a7f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:09.072907    8042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.key.dd3b5fb2 ...
	I1127 23:26:09.072923    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.key.dd3b5fb2: {Name:mk1ba421c9ecb049ffce9a5a7c60868f80c33e04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:09.073005    8042 certs.go:337] copying /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.crt
	I1127 23:26:09.073079    8042 certs.go:341] copying /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.key
	I1127 23:26:09.073135    8042 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.key
	I1127 23:26:09.073153    8042 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.crt with IP's: []
	I1127 23:26:09.408560    8042 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.crt ...
	I1127 23:26:09.408589    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.crt: {Name:mk328c67367e24401c55183c2452f4458163c3cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:09.408761    8042 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.key ...
	I1127 23:26:09.408773    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.key: {Name:mka4d5d284e4ed2df616e58314fbe65988ced53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:09.408957    8042 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:26:09.409000    8042 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:26:09.409030    8042 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:26:09.409076    8042 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem (1679 bytes)
	I1127 23:26:09.409725    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:26:09.436851    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:26:09.462118    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:26:09.487019    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1127 23:26:09.512533    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:26:09.537902    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 23:26:09.563178    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:26:09.588462    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:26:09.614213    8042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:26:09.640094    8042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:26:09.659901    8042 ssh_runner.go:195] Run: openssl version
	I1127 23:26:09.666637    8042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:26:09.677383    8042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:26:09.681710    8042 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:26:09.681786    8042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:26:09.689594    8042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
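The two commands above implement OpenSSL's hashed-directory lookup: openssl x509 -hash -noout prints the subject-name hash of the CA (b5213941 here), and certificates in /etc/ssl/certs are located via <hash>.N symlinks, with .0 as the first (collision-free) slot. Reproducing the symlink by hand with the same files:
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"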
	I1127 23:26:09.700189    8042 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:26:09.704120    8042 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:26:09.704162    8042 kubeadm.go:404] StartCluster: {Name:addons-889952 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-889952 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:26:09.704276    8042 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1127 23:26:09.723361    8042 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:26:09.733382    8042 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:26:09.743068    8042 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:26:09.743126    8042 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:26:09.752643    8042 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:26:09.752679    8042 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:26:09.805474    8042 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 23:26:09.805871    8042 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:26:09.861792    8042 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:26:09.861868    8042 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:26:09.861918    8042 kubeadm.go:322] OS: Linux
	I1127 23:26:09.861966    8042 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:26:09.862024    8042 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:26:09.862078    8042 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:26:09.862134    8042 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:26:09.862184    8042 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:26:09.862243    8042 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:26:09.862298    8042 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1127 23:26:09.862376    8042 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1127 23:26:09.862427    8042 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1127 23:26:09.942284    8042 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:26:09.942414    8042 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:26:09.942513    8042 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:26:10.260849    8042 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:26:10.265846    8042 out.go:204]   - Generating certificates and keys ...
	I1127 23:26:10.265998    8042 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:26:10.266073    8042 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:26:11.038713    8042 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:26:11.279396    8042 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:26:11.562973    8042 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:26:11.989237    8042 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:26:12.226502    8042 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:26:12.226869    8042 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-889952 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:26:12.847989    8042 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:26:12.848474    8042 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-889952 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:26:13.131489    8042 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:26:13.763147    8042 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:26:13.881019    8042 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:26:13.881357    8042 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:26:14.194548    8042 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:26:14.732467    8042 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:26:15.184274    8042 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:26:15.740114    8042 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:26:15.740927    8042 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:26:15.743465    8042 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:26:15.745419    8042 out.go:204]   - Booting up control plane ...
	I1127 23:26:15.745552    8042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:26:15.745634    8042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:26:15.746415    8042 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:26:15.761352    8042 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:26:15.762082    8042 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:26:15.762352    8042 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:26:15.876000    8042 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:26:22.878041    8042 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002499 seconds
	I1127 23:26:22.878158    8042 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:26:22.892288    8042 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:26:23.418206    8042 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:26:23.418424    8042 kubeadm.go:322] [mark-control-plane] Marking the node addons-889952 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 23:26:23.928592    8042 kubeadm.go:322] [bootstrap-token] Using token: h6iwml.38w15b68z2fygn64
	I1127 23:26:23.930931    8042 out.go:204]   - Configuring RBAC rules ...
	I1127 23:26:23.931044    8042 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:26:23.935134    8042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:26:23.941619    8042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:26:23.946169    8042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:26:23.949147    8042 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:26:23.952132    8042 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:26:23.963669    8042 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:26:24.180364    8042 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:26:24.343199    8042 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:26:24.344734    8042 kubeadm.go:322] 
	I1127 23:26:24.344810    8042 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:26:24.344820    8042 kubeadm.go:322] 
	I1127 23:26:24.344893    8042 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:26:24.344901    8042 kubeadm.go:322] 
	I1127 23:26:24.344925    8042 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:26:24.345415    8042 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:26:24.345471    8042 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:26:24.345482    8042 kubeadm.go:322] 
	I1127 23:26:24.345534    8042 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 23:26:24.345544    8042 kubeadm.go:322] 
	I1127 23:26:24.345589    8042 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 23:26:24.345597    8042 kubeadm.go:322] 
	I1127 23:26:24.345646    8042 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:26:24.345724    8042 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:26:24.345792    8042 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:26:24.345800    8042 kubeadm.go:322] 
	I1127 23:26:24.346100    8042 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:26:24.346183    8042 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:26:24.346193    8042 kubeadm.go:322] 
	I1127 23:26:24.346511    8042 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token h6iwml.38w15b68z2fygn64 \
	I1127 23:26:24.346613    8042 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f4690ca77095961f6bb42e5114ae321e899e29e7a594db1af8b49ab63220abf \
	I1127 23:26:24.346827    8042 kubeadm.go:322] 	--control-plane 
	I1127 23:26:24.346844    8042 kubeadm.go:322] 
	I1127 23:26:24.347163    8042 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:26:24.347173    8042 kubeadm.go:322] 
	I1127 23:26:24.347502    8042 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token h6iwml.38w15b68z2fygn64 \
	I1127 23:26:24.347829    8042 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0f4690ca77095961f6bb42e5114ae321e899e29e7a594db1af8b49ab63220abf 
	I1127 23:26:24.351539    8042 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:26:24.351644    8042 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
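The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane at any time with the standard kubeadm procedure (a sketch for reference, not part of this run's output):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'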
	I1127 23:26:24.351659    8042 cni.go:84] Creating CNI manager for ""
	I1127 23:26:24.351682    8042 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 23:26:24.355785    8042 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1127 23:26:24.357961    8042 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1127 23:26:24.376958    8042 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
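The 457-byte conflist copied above is the bridge CNI configuration the log recommends for the "docker" driver + "docker" runtime combination. A representative bridge conflist looks roughly like this (illustrative sketch only; field values such as the pod subnet are assumptions, not the exact file minikube generated):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}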
	I1127 23:26:24.407602    8042 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:26:24.407722    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:24.407799    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=addons-889952 minikube.k8s.io/updated_at=2023_11_27T23_26_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:24.695686    8042 ops.go:34] apiserver oom_adj: -16
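An oom_adj of -16 means the kernel OOM killer will sacrifice almost any other process before kube-apiserver (the legacy scale runs from -17, never kill, to +15). oom_adj is the legacy knob; the modern per-process interface is oom_score_adj, which can be read the same way (illustrative):

	cat /proc/$(pgrep kube-apiserver)/oom_score_adj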
	I1127 23:26:24.695839    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:24.788640    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:25.383958    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:25.884307    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:26.383712    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:26.883776    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:27.384103    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:27.883842    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:28.384194    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:28.884163    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:29.383434    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:29.883430    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:30.384270    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:30.883580    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:31.384112    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:31.883422    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:32.383367    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:32.884234    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:33.383420    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:33.883425    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:34.383410    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:34.884125    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:35.383422    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:35.883955    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:36.383591    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:36.883432    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:37.383509    8042 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:26:37.471432    8042 kubeadm.go:1081] duration metric: took 13.06374384s to wait for elevateKubeSystemPrivileges.
	I1127 23:26:37.471462    8042 kubeadm.go:406] StartCluster complete in 27.767302205s
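The burst of identical "kubectl get sa default" runs above is a readiness poll: the default ServiceAccount (the target of the minikube-rbac cluster-admin binding created at 23:26:24.407722) appears asynchronously, so minikube retries roughly every 500ms, judging by the timestamps, until the lookup succeeds, 13.06s in total here. As a shell loop the same wait would look like this (a sketch only; minikube drives the retry from Go):

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done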
	I1127 23:26:37.471478    8042 settings.go:142] acquiring lock: {Name:mk0fc8a58a3a281d2b922894958c44e0a802f6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:37.471607    8042 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:26:37.472081    8042 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/kubeconfig: {Name:mk7ba64d42902767d9bc759b2ed9230b4474c63d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:26:37.472401    8042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:26:37.472648    8042 config.go:182] Loaded profile config "addons-889952": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:26:37.472683    8042 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1127 23:26:37.472786    8042 addons.go:69] Setting volumesnapshots=true in profile "addons-889952"
	I1127 23:26:37.472819    8042 addons.go:231] Setting addon volumesnapshots=true in "addons-889952"
	I1127 23:26:37.472870    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.473331    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.473457    8042 addons.go:69] Setting ingress-dns=true in profile "addons-889952"
	I1127 23:26:37.473474    8042 addons.go:231] Setting addon ingress-dns=true in "addons-889952"
	I1127 23:26:37.473508    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.473885    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.476316    8042 addons.go:69] Setting cloud-spanner=true in profile "addons-889952"
	I1127 23:26:37.476351    8042 addons.go:231] Setting addon cloud-spanner=true in "addons-889952"
	I1127 23:26:37.476386    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.476457    8042 addons.go:69] Setting inspektor-gadget=true in profile "addons-889952"
	I1127 23:26:37.476477    8042 addons.go:231] Setting addon inspektor-gadget=true in "addons-889952"
	I1127 23:26:37.476534    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.476783    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.476938    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.477150    8042 addons.go:69] Setting metrics-server=true in profile "addons-889952"
	I1127 23:26:37.477165    8042 addons.go:231] Setting addon metrics-server=true in "addons-889952"
	I1127 23:26:37.477196    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.477559    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.477699    8042 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-889952"
	I1127 23:26:37.477750    8042 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-889952"
	I1127 23:26:37.477790    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.478170    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.482485    8042 addons.go:69] Setting default-storageclass=true in profile "addons-889952"
	I1127 23:26:37.482509    8042 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-889952"
	I1127 23:26:37.482789    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.487850    8042 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-889952"
	I1127 23:26:37.487884    8042 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-889952"
	I1127 23:26:37.487927    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.488347    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.494592    8042 addons.go:69] Setting gcp-auth=true in profile "addons-889952"
	I1127 23:26:37.494623    8042 mustload.go:65] Loading cluster: addons-889952
	I1127 23:26:37.494810    8042 config.go:182] Loaded profile config "addons-889952": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:26:37.495052    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.502845    8042 addons.go:69] Setting registry=true in profile "addons-889952"
	I1127 23:26:37.502875    8042 addons.go:231] Setting addon registry=true in "addons-889952"
	I1127 23:26:37.502923    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.503415    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.518447    8042 addons.go:69] Setting ingress=true in profile "addons-889952"
	I1127 23:26:37.518489    8042 addons.go:231] Setting addon ingress=true in "addons-889952"
	I1127 23:26:37.518543    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.519015    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.527827    8042 addons.go:69] Setting storage-provisioner=true in profile "addons-889952"
	I1127 23:26:37.527860    8042 addons.go:231] Setting addon storage-provisioner=true in "addons-889952"
	I1127 23:26:37.527912    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.528343    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.570547    8042 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-889952"
	I1127 23:26:37.570623    8042 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-889952"
	I1127 23:26:37.570972    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.642552    8042 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1127 23:26:37.646460    8042 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1127 23:26:37.646517    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1127 23:26:37.646610    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
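The Go template in these docker inspect calls extracts the host port Docker mapped to the container's SSH port (22/tcp); the sshutil lines further down show the answer for this run, 32772. Run standalone, the same query is (identical template, shell quoting simplified):

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  addons-889952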
	I1127 23:26:37.691680    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1127 23:26:37.693799    8042 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1127 23:26:37.695780    8042 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:26:37.695797    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1127 23:26:37.695853    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.702391    8042 out.go:177]   - Using image docker.io/registry:2.8.3
	I1127 23:26:37.705129    8042 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1127 23:26:37.710393    8042 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1127 23:26:37.712769    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1127 23:26:37.712884    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.720659    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1127 23:26:37.722393    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1127 23:26:37.724132    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1127 23:26:37.720613    8042 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1127 23:26:37.720619    8042 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1127 23:26:37.720624    8042 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1127 23:26:37.721635    8042 addons.go:231] Setting addon default-storageclass=true in "addons-889952"
	I1127 23:26:37.730577    8042 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-889952" context rescaled to 1 replica
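The rescale above pins CoreDNS to a single replica for this one-node cluster; the equivalent explicit operation would be (illustrative, since minikube performs the rescale through the API rather than shelling out):

	kubectl -n kube-system scale deployment coredns --replicas=1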
	I1127 23:26:37.732412    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1127 23:26:37.732418    8042 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:26:37.732450    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.732477    8042 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 23:26:37.736353    8042 out.go:177] * Verifying Kubernetes components...
	I1127 23:26:37.734466    8042 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 23:26:37.734500    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1127 23:26:37.734558    8042 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:26:37.735067    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.737394    8042 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-889952"
	I1127 23:26:37.740729    8042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:26:37.740748    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 23:26:37.741678    8042 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:26:37.741697    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1127 23:26:37.743478    8042 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:26:37.743507    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.743565    8042 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1127 23:26:37.743823    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.753544    8042 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:26:37.745934    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.745946    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:26:37.746776    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:37.746783    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1127 23:26:37.769752    8042 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1127 23:26:37.771553    8042 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1127 23:26:37.771570    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1127 23:26:37.771637    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.784528    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.755533    8042 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:26:37.786938    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1127 23:26:37.787010    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.755541    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1127 23:26:37.757483    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.846485    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:37.863760    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1127 23:26:37.873046    8042 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1127 23:26:37.891537    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1127 23:26:37.891595    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1127 23:26:37.891679    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.895974    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:37.902532    8042 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:26:37.902577    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:26:37.902651    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:37.909202    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:37.944166    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:37.977084    8042 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:26:37.977886    8042 node_ready.go:35] waiting up to 6m0s for node "addons-889952" to be "Ready" ...
	I1127 23:26:37.980218    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.019623    8042 node_ready.go:49] node "addons-889952" has status "Ready":"True"
	I1127 23:26:38.019644    8042 node_ready.go:38] duration metric: took 39.646929ms waiting for node "addons-889952" to be "Ready" ...
	I1127 23:26:38.019655    8042 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
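The node_ready.go check corresponds to the standard Ready-condition wait; expressed with kubectl it would be (a sketch, as minikube queries the node status directly from Go):

	kubectl --context addons-889952 wait --for=condition=Ready \
	  node/addons-889952 --timeout=6m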
	I1127 23:26:38.044564    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.052888    8042 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace to be "Ready" ...
	I1127 23:26:38.094181    8042 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1127 23:26:38.092661    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.103265    8042 out.go:177]   - Using image docker.io/busybox:stable
	I1127 23:26:38.106504    8042 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:26:38.106528    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1127 23:26:38.106586    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:38.106715    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.103392    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.105023    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.105068    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.128891    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.153274    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:38.437780    8042 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1127 23:26:38.437804    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1127 23:26:38.452111    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1127 23:26:38.600704    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1127 23:26:38.646259    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1127 23:26:38.646282    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1127 23:26:38.718866    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1127 23:26:38.718891    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1127 23:26:38.739115    8042 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 23:26:38.739139    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1127 23:26:38.765252    8042 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:26:38.765276    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1127 23:26:38.792595    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1127 23:26:38.809740    8042 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1127 23:26:38.809765    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1127 23:26:38.864543    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1127 23:26:38.878226    8042 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1127 23:26:38.878247    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1127 23:26:38.940422    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1127 23:26:38.940669    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:26:38.976113    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:26:39.036568    8042 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1127 23:26:39.036592    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1127 23:26:39.052430    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1127 23:26:39.112502    8042 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 23:26:39.112527    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 23:26:39.116855    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1127 23:26:39.116916    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1127 23:26:39.236171    8042 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1127 23:26:39.236197    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1127 23:26:39.370015    8042 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:26:39.370040    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 23:26:39.383729    8042 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1127 23:26:39.383753    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1127 23:26:39.489112    8042 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1127 23:26:39.489138    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1127 23:26:39.496365    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1127 23:26:39.496387    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1127 23:26:39.712873    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 23:26:39.784091    8042 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1127 23:26:39.784118    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1127 23:26:39.831387    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1127 23:26:39.831416    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1127 23:26:39.907516    8042 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1127 23:26:39.907543    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1127 23:26:40.085768    8042 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1127 23:26:40.085794    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1127 23:26:40.144256    8042 pod_ready.go:102] pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:40.227690    8042 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1127 23:26:40.227760    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1127 23:26:40.298851    8042 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1127 23:26:40.298917    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1127 23:26:40.401236    8042 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:26:40.401308    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1127 23:26:40.527088    8042 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1127 23:26:40.527167    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1127 23:26:40.535193    8042 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1127 23:26:40.535256    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1127 23:26:40.667613    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:26:40.780892    8042 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1127 23:26:40.780961    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1127 23:26:40.803353    8042 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:26:40.803411    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1127 23:26:40.938831    8042 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:26:40.938899    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1127 23:26:40.987937    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1127 23:26:41.091275    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1127 23:26:41.350628    8042 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.3713122s)
	I1127 23:26:41.350708    8042 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
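Reconstructed from the sed expressions in the pipeline above, which insert a hosts block before the forward directive and a log directive before errors, the rewritten Corefile fragment comes out roughly as follows (a sketch derived from the sed script, with unrelated plugins elided, not a dump read back from the cluster):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}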
	I1127 23:26:41.631468    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.179322344s)
	I1127 23:26:42.147377    8042 pod_ready.go:102] pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:43.086507    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.485761652s)
	I1127 23:26:43.086732    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.294111653s)
	I1127 23:26:44.227908    8042 pod_ready.go:102] pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:44.520409    8042 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1127 23:26:44.520528    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:44.561135    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:45.229820    8042 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1127 23:26:45.416904    8042 addons.go:231] Setting addon gcp-auth=true in "addons-889952"
	I1127 23:26:45.416952    8042 host.go:66] Checking if "addons-889952" exists ...
	I1127 23:26:45.417422    8042 cli_runner.go:164] Run: docker container inspect addons-889952 --format={{.State.Status}}
	I1127 23:26:45.451102    8042 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1127 23:26:45.451152    8042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-889952
	I1127 23:26:45.475776    8042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/addons-889952/id_rsa Username:docker}
	I1127 23:26:46.430799    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.566219333s)
	I1127 23:26:46.430835    8042 addons.go:467] Verifying addon ingress=true in "addons-889952"
	I1127 23:26:46.434192    8042 out.go:177] * Verifying ingress addon...
	I1127 23:26:46.431050    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.490602091s)
	I1127 23:26:46.431072    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.490386878s)
	I1127 23:26:46.431115    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.454942189s)
	I1127 23:26:46.431142    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.378686906s)
	I1127 23:26:46.431192    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.718293167s)
	I1127 23:26:46.431280    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.763594984s)
	I1127 23:26:46.431341    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.443335541s)
	I1127 23:26:46.436470    8042 addons.go:467] Verifying addon registry=true in "addons-889952"
	I1127 23:26:46.438532    8042 out.go:177] * Verifying registry addon...
	I1127 23:26:46.437045    8042 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1127 23:26:46.437062    8042 addons.go:467] Verifying addon metrics-server=true in "addons-889952"
	W1127 23:26:46.437082    8042 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1127 23:26:46.440210    8042 retry.go:31] will retry after 141.941515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
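The failure above is a CRD ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not established the new kind yet. minikube's retry below recovers with apply --force; the conventional fix is to apply the CRDs first, wait for them to become Established, then apply the custom resources (an illustrative sketch, not what minikube does here):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml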
	I1127 23:26:46.440951    8042 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1127 23:26:46.449991    8042 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1127 23:26:46.450014    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:46.460303    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:46.461086    8042 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1127 23:26:46.461101    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:46.466510    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:46.583301    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1127 23:26:46.651793    8042 pod_ready.go:102] pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:46.967202    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:46.974472    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:47.469606    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:47.491139    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:47.906789    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.81541911s)
	I1127 23:26:47.906818    8042 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-889952"
	I1127 23:26:47.911340    8042 out.go:177] * Verifying csi-hostpath-driver addon...
	I1127 23:26:47.907034    8042 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.455911653s)
	I1127 23:26:47.916017    8042 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1127 23:26:47.914761    8042 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1127 23:26:47.919813    8042 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1127 23:26:47.921916    8042 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1127 23:26:47.921934    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1127 23:26:47.924884    8042 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1127 23:26:47.924904    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:47.930066    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:47.971354    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:47.974008    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:47.986728    8042 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1127 23:26:47.986748    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1127 23:26:48.038764    8042 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:26:48.038784    8042 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1127 23:26:48.088272    8042 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1127 23:26:48.435672    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:48.465894    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:48.470798    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:48.737300    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.153954861s)
	I1127 23:26:48.935924    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:48.965055    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:48.971404    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:49.144044    8042 pod_ready.go:102] pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace has status "Ready":"False"
	I1127 23:26:49.453265    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:26:49.476031    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:26:49.476611    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1127 23:26:49.483256    8042 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.394948074s)
	I1127 23:26:49.484796    8042 addons.go:467] Verifying addon gcp-auth=true in "addons-889952"
	I1127 23:26:49.486852    8042 out.go:177] * Verifying gcp-auth addon...
	I1127 23:26:49.490053    8042 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1127 23:26:49.499457    8042 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1127 23:26:49.499478    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling repeats at ~0.5s intervals, 23:26:49.9 through 23:27:00.9: the csi-hostpath-driver, ingress-nginx, registry, and gcp-auth selectors all stay Pending; pod_ready.go:102 logs coredns-5dd5756b68-gg8kv "Ready":"False" again at 23:26:51.6, 23:26:54.1, 23:26:56.6, and 23:26:59.1 ...]
	I1127 23:27:00.971217    8042 kapi.go:107] duration metric: took 14.530263534s to wait for kubernetes.io/minikube-addons=registry ...
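Each "kapi.go:107] duration metric" line marks one selector's wait finishing; here the registry pods became Running about 14.5s after that poll began, so the registry selector drops out of the rotation below. Loosely, this is what `kubectl wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry -n kube-system` does, though that flag checks the Ready condition rather than the Running phase.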
	[... polling continues, 23:27:01.0 through 23:27:08.5: csi-hostpath-driver, ingress-nginx, and gcp-auth stay Pending (registry is complete and no longer polled); coredns is still "Ready":"False" at 23:27:01.1, 23:27:03.1, 23:27:05.6, and 23:27:08.1 ...]
	I1127 23:27:08.644006    8042 pod_ready.go:92] pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:08.644030    8042 pod_ready.go:81] duration metric: took 30.591116563s waiting for pod "coredns-5dd5756b68-gg8kv" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.644041    8042 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.649548    8042 pod_ready.go:92] pod "etcd-addons-889952" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:08.649612    8042 pod_ready.go:81] duration metric: took 5.552415ms waiting for pod "etcd-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.649636    8042 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.655509    8042 pod_ready.go:92] pod "kube-apiserver-addons-889952" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:08.655570    8042 pod_ready.go:81] duration metric: took 5.909874ms waiting for pod "kube-apiserver-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.655595    8042 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.661214    8042 pod_ready.go:92] pod "kube-controller-manager-addons-889952" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:08.661272    8042 pod_ready.go:81] duration metric: took 5.655998ms waiting for pod "kube-controller-manager-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.661298    8042 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6vqwc" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.667638    8042 pod_ready.go:92] pod "kube-proxy-6vqwc" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:08.667659    8042 pod_ready.go:81] duration metric: took 6.340473ms waiting for pod "kube-proxy-6vqwc" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.667669    8042 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:08.935563    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:08.964296    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:09.010094    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:09.042140    8042 pod_ready.go:92] pod "kube-scheduler-addons-889952" in "kube-system" namespace has status "Ready":"True"
	I1127 23:27:09.042164    8042 pod_ready.go:81] duration metric: took 374.488064ms waiting for pod "kube-scheduler-addons-889952" in "kube-system" namespace to be "Ready" ...
	I1127 23:27:09.042174    8042 pod_ready.go:38] duration metric: took 31.022509099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
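The pod_ready.go waits above gate on the pod's Ready condition rather than its phase, which is why a pod can report "Ready":"False" for a while after its containers have started. A condition check along these lines, over client-go types; isPodReady is an illustrative name:

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the PodReady condition is True, i.e. every
	// container in the pod has passed its readiness probe.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}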
	I1127 23:27:09.042191    8042 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:27:09.042258    8042 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:27:09.059384    8042 api_server.go:72] duration metric: took 31.324993732s to wait for apiserver process to appear ...
	I1127 23:27:09.059410    8042 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:27:09.059427    8042 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 23:27:09.070353    8042 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 23:27:09.071749    8042 api_server.go:141] control plane version: v1.28.4
	I1127 23:27:09.071767    8042 api_server.go:131] duration metric: took 12.350344ms to wait for apiserver health ...
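The healthz probe above is an HTTPS GET against the apiserver that expects a 200 "ok" response. A self-contained sketch; skipping TLS verification is an assumption made for brevity here, whereas the real check authenticates with the cluster's client certificates:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// checkHealthz fetches <base>/healthz and treats HTTP 200 as healthy.
	func checkHealthz(base string) error {
		tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} // assumption: cert checks skipped
		resp, err := (&http.Client{Transport: tr}).Get(base + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.49.2:8443"); err != nil {
			fmt.Println("unhealthy:", err)
		}
	}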
	I1127 23:27:09.071774    8042 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:27:09.249696    8042 system_pods.go:59] 17 kube-system pods found
	I1127 23:27:09.249774    8042 system_pods.go:61] "coredns-5dd5756b68-gg8kv" [527848d6-6bef-4368-beb1-d4a4c17d7931] Running
	I1127 23:27:09.249798    8042 system_pods.go:61] "csi-hostpath-attacher-0" [28740c72-a369-48eb-bbb7-cdda1e5fa65c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1127 23:27:09.249822    8042 system_pods.go:61] "csi-hostpath-resizer-0" [ed457fb8-1e50-418d-b00f-2adc9f859ba2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1127 23:27:09.249865    8042 system_pods.go:61] "csi-hostpathplugin-vq9gx" [60f63aa5-21b1-4ff9-a47f-cd66827837fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1127 23:27:09.249886    8042 system_pods.go:61] "etcd-addons-889952" [3f39cd95-0240-4033-9de3-9c5a8eddf948] Running
	I1127 23:27:09.249911    8042 system_pods.go:61] "kube-apiserver-addons-889952" [f755434d-1e51-4af4-b035-25727e548ca4] Running
	I1127 23:27:09.249944    8042 system_pods.go:61] "kube-controller-manager-addons-889952" [fabd0a60-eb6e-48d0-bd33-82a3b316ea3f] Running
	I1127 23:27:09.249975    8042 system_pods.go:61] "kube-ingress-dns-minikube" [f25cb5f9-6368-410a-8d5a-cbade95ccf8e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1127 23:27:09.250000    8042 system_pods.go:61] "kube-proxy-6vqwc" [1aaa5c60-99c2-4e73-9c47-56c26b7f14f3] Running
	I1127 23:27:09.250028    8042 system_pods.go:61] "kube-scheduler-addons-889952" [03d3ba5a-1222-4284-a654-7a2e150b26a0] Running
	I1127 23:27:09.250066    8042 system_pods.go:61] "metrics-server-7c66d45ddc-5lpw9" [21ff43a6-cc55-40ff-90c0-8b20cc3d2980] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 23:27:09.250097    8042 system_pods.go:61] "nvidia-device-plugin-daemonset-b7q6h" [39b12e34-7535-44fe-8b38-80c87e242bff] Running
	I1127 23:27:09.250122    8042 system_pods.go:61] "registry-5mgs4" [faf11a16-62ba-4bff-9bb0-cbe6f73bcf87] Running
	I1127 23:27:09.250147    8042 system_pods.go:61] "registry-proxy-d7k5l" [314907ab-51ee-48cd-ab69-01ae6063dd8d] Running
	I1127 23:27:09.250187    8042 system_pods.go:61] "snapshot-controller-58dbcc7b99-bxnbr" [026e27d5-c2b2-4766-8ff6-bc635016f586] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1127 23:27:09.250217    8042 system_pods.go:61] "snapshot-controller-58dbcc7b99-jp46j" [dcc59704-4e65-43c4-8e28-cf0fbf8ca01a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1127 23:27:09.250242    8042 system_pods.go:61] "storage-provisioner" [2f42e337-eb66-47ff-b702-333d29a2a2df] Running
	I1127 23:27:09.250266    8042 system_pods.go:74] duration metric: took 178.484536ms to wait for pod list to return data ...
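To read the pod entries above: each is "<phase> / Ready:<reason> / ContainersReady:<reason>", and the bracketed list names the containers that have not yet passed readiness. So csi-hostpath-attacher-0 is still Pending because its csi-attacher container is not up yet, while coredns and the control-plane pods are fully Running.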
	I1127 23:27:09.250297    8042 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:27:09.436513    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1127 23:27:09.440863    8042 default_sa.go:45] found service account: "default"
	I1127 23:27:09.440884    8042 default_sa.go:55] duration metric: took 190.545951ms for default service account to be created ...
	I1127 23:27:09.440894    8042 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:27:09.465053    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:09.509623    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:09.647965    8042 system_pods.go:86] 17 kube-system pods found
	I1127 23:27:09.647997    8042 system_pods.go:89] "coredns-5dd5756b68-gg8kv" [527848d6-6bef-4368-beb1-d4a4c17d7931] Running
	I1127 23:27:09.648008    8042 system_pods.go:89] "csi-hostpath-attacher-0" [28740c72-a369-48eb-bbb7-cdda1e5fa65c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1127 23:27:09.648016    8042 system_pods.go:89] "csi-hostpath-resizer-0" [ed457fb8-1e50-418d-b00f-2adc9f859ba2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1127 23:27:09.648025    8042 system_pods.go:89] "csi-hostpathplugin-vq9gx" [60f63aa5-21b1-4ff9-a47f-cd66827837fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1127 23:27:09.648031    8042 system_pods.go:89] "etcd-addons-889952" [3f39cd95-0240-4033-9de3-9c5a8eddf948] Running
	I1127 23:27:09.648037    8042 system_pods.go:89] "kube-apiserver-addons-889952" [f755434d-1e51-4af4-b035-25727e548ca4] Running
	I1127 23:27:09.648048    8042 system_pods.go:89] "kube-controller-manager-addons-889952" [fabd0a60-eb6e-48d0-bd33-82a3b316ea3f] Running
	I1127 23:27:09.648062    8042 system_pods.go:89] "kube-ingress-dns-minikube" [f25cb5f9-6368-410a-8d5a-cbade95ccf8e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1127 23:27:09.648068    8042 system_pods.go:89] "kube-proxy-6vqwc" [1aaa5c60-99c2-4e73-9c47-56c26b7f14f3] Running
	I1127 23:27:09.648082    8042 system_pods.go:89] "kube-scheduler-addons-889952" [03d3ba5a-1222-4284-a654-7a2e150b26a0] Running
	I1127 23:27:09.648089    8042 system_pods.go:89] "metrics-server-7c66d45ddc-5lpw9" [21ff43a6-cc55-40ff-90c0-8b20cc3d2980] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 23:27:09.648097    8042 system_pods.go:89] "nvidia-device-plugin-daemonset-b7q6h" [39b12e34-7535-44fe-8b38-80c87e242bff] Running
	I1127 23:27:09.648102    8042 system_pods.go:89] "registry-5mgs4" [faf11a16-62ba-4bff-9bb0-cbe6f73bcf87] Running
	I1127 23:27:09.648110    8042 system_pods.go:89] "registry-proxy-d7k5l" [314907ab-51ee-48cd-ab69-01ae6063dd8d] Running
	I1127 23:27:09.648117    8042 system_pods.go:89] "snapshot-controller-58dbcc7b99-bxnbr" [026e27d5-c2b2-4766-8ff6-bc635016f586] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1127 23:27:09.648124    8042 system_pods.go:89] "snapshot-controller-58dbcc7b99-jp46j" [dcc59704-4e65-43c4-8e28-cf0fbf8ca01a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1127 23:27:09.648129    8042 system_pods.go:89] "storage-provisioner" [2f42e337-eb66-47ff-b702-333d29a2a2df] Running
	I1127 23:27:09.648136    8042 system_pods.go:126] duration metric: took 207.236995ms to wait for k8s-apps to be running ...
	I1127 23:27:09.648147    8042 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:27:09.648203    8042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:27:09.664866    8042 system_svc.go:56] duration metric: took 16.710589ms WaitForService to wait for kubelet.
	I1127 23:27:09.664889    8042 kubeadm.go:581] duration metric: took 31.93050527s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
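The map in the preceding line is Go's %v rendering of the readiness gates this start-up waited on: apiserver health, apps running, the default service account, kubelet, node readiness, system pods, plus the "extra" per-label pod waits that make up the bulk of this log.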
	I1127 23:27:09.664908    8042 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:27:09.842330    8042 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:27:09.842399    8042 node_conditions.go:123] node cpu capacity is 2
	I1127 23:27:09.842436    8042 node_conditions.go:105] duration metric: took 177.522537ms to run NodePressure ...
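The NodePressure step reads the node object's conditions and capacity, which is where the ephemeral-storage and CPU figures above come from. A sketch over client-go types; nodeUnderPressure is an illustrative name:

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// nodeUnderPressure reports whether any resource-pressure condition on
	// the node is True.
	func nodeUnderPressure(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					return true
				}
			}
		}
		return false
	}

	// The logged capacities live in n.Status.Capacity, e.g.
	// n.Status.Capacity[corev1.ResourceCPU] and
	// n.Status.Capacity[corev1.ResourceEphemeralStorage].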
	I1127 23:27:09.842472    8042 start.go:228] waiting for startup goroutines ...
	[... polling continues at ~0.5s intervals, 23:27:09.9 through 23:27:32.0: csi-hostpath-driver, ingress-nginx, and gcp-auth all stay Pending ...]
	I1127 23:27:32.435137    8042 kapi.go:107] duration metric: took 44.520372734s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... polling continues, 23:27:32.4 through 23:27:54.9: ingress-nginx and gcp-auth stay Pending ...]
	I1127 23:27:55.009653    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:55.464883    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:55.508885    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:55.964314    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:56.009765    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:56.464963    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:56.509309    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:56.964719    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:57.013111    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:57.466401    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:57.510989    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:57.971618    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:58.009280    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:58.473512    8042 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1127 23:27:58.511898    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:58.964960    8042 kapi.go:107] duration metric: took 1m12.527912964s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1127 23:27:59.010081    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:27:59.508525    8042 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1127 23:28:00.009787    8042 kapi.go:107] duration metric: took 1m10.519728987s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1127 23:28:00.011909    8042 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-889952 cluster.
	I1127 23:28:00.014015    8042 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1127 23:28:00.016442    8042 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1127 23:28:00.018550    8042 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, storage-provisioner, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1127 23:28:00.020497    8042 addons.go:502] enable addons completed in 1m22.547807289s: enabled=[cloud-spanner ingress-dns storage-provisioner-rancher inspektor-gadget nvidia-device-plugin storage-provisioner metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1127 23:28:00.020544    8042 start.go:233] waiting for cluster config update ...
	I1127 23:28:00.020563    8042 start.go:242] writing updated cluster config ...
	I1127 23:28:00.020875    8042 ssh_runner.go:195] Run: rm -f paused
	I1127 23:28:00.358281    8042 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 23:28:00.360374    8042 out.go:177] * Done! kubectl is now configured to use "addons-889952" cluster and "default" namespace by default
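	The repeated kapi.go:96 "waiting for pod ... current state: Pending" lines above, and the kapi.go:107 duration metrics that close them out (1m12.5s for ingress-nginx, 1m10.5s for gcp-auth), come from a label-selector poll loop. Below is a minimal, self-contained sketch of such a loop using client-go, assuming a standard kubeconfig; the function name, namespace, timeout, and poll interval are illustrative, not minikube's actual kapi implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching selector is Running,
	// mirroring the "waiting for pod ... Pending" cadence in the log above.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				allRunning := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						allRunning = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if allRunning {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the ~0.5s interval seen above
		}
		return fmt.Errorf("timed out waiting for %s", selector)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			panic(err)
		}
	}

	In this run both waits resolved well inside their budgets, so the Pending lines above are normal startup latency rather than a failure signal.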
	
	* 
	* ==> Docker <==
	* Nov 27 23:28:45 addons-889952 dockerd[1097]: time="2023-11-27T23:28:45.467300834Z" level=info msg="ignoring event" container=5ab7d0cdbfceae06704d9682ec29c83d97aa8cdda11368595b7e7de53364df3d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:46 addons-889952 dockerd[1097]: time="2023-11-27T23:28:46.198583067Z" level=info msg="ignoring event" container=bc3e0c95cfb30553c87429a0645705a769a91f506f80d834219d655ffa1f8677 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:49 addons-889952 dockerd[1097]: time="2023-11-27T23:28:49.796615940Z" level=info msg="ignoring event" container=d0bcede1353b974e2b621ebe83cc4b53c9a9b4f8d4eaa07efa6fdda69bce8c5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:49 addons-889952 dockerd[1097]: time="2023-11-27T23:28:49.832952193Z" level=info msg="ignoring event" container=cf4277b068c03562c350f2379766d4d3eb9cda842f451b7e89b3c501c62f4e04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:49 addons-889952 dockerd[1097]: time="2023-11-27T23:28:49.935445920Z" level=info msg="ignoring event" container=17dbaf4654de7d7628b49b138e2a8d9c1192f3b6932f79e52d7dc1466cec8529 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:49 addons-889952 dockerd[1097]: time="2023-11-27T23:28:49.991257723Z" level=info msg="ignoring event" container=2f277f7b207851f74a0dd0e4afae634db90888f338d9b470dd9e235512beea59 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:55 addons-889952 dockerd[1097]: time="2023-11-27T23:28:55.423207541Z" level=info msg="ignoring event" container=a2125fd3556f4cf4e90dbefbc47651ec2f0cc429ac7f1de28a83657728f8e10a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:55 addons-889952 dockerd[1097]: time="2023-11-27T23:28:55.566632162Z" level=info msg="ignoring event" container=6ad7286b9a72de05258048ba555f34a6816b30b5eaae18302c658766cca7a4d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:56 addons-889952 cri-dockerd[1307]: time="2023-11-27T23:28:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/72abff446e4b072785ee33ca7a62ae01c11343ce270879ae9cd9bbbb978b29ec/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Nov 27 23:28:56 addons-889952 dockerd[1097]: time="2023-11-27T23:28:56.884174386Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 27 23:28:57 addons-889952 cri-dockerd[1307]: time="2023-11-27T23:28:57Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Nov 27 23:28:57 addons-889952 dockerd[1097]: time="2023-11-27T23:28:57.543402997Z" level=info msg="ignoring event" container=62a203820a14662025a8163e7a190328573999c68c26faa0a89c41981b9a8360 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:58 addons-889952 dockerd[1097]: time="2023-11-27T23:28:58.801150168Z" level=info msg="ignoring event" container=9ff6661f966e3ac7aeaddb0d2335abfb70a6c3491d488a784d008b3017976c91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:28:59 addons-889952 dockerd[1097]: time="2023-11-27T23:28:59.316860049Z" level=info msg="ignoring event" container=72abff446e4b072785ee33ca7a62ae01c11343ce270879ae9cd9bbbb978b29ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:01 addons-889952 cri-dockerd[1307]: time="2023-11-27T23:29:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2529c4e075f44f6d4e0db277c261ce727f217c9cbd1cf1592b28167200527801/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Nov 27 23:29:01 addons-889952 cri-dockerd[1307]: time="2023-11-27T23:29:01Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Nov 27 23:29:02 addons-889952 dockerd[1097]: time="2023-11-27T23:29:02.132043290Z" level=info msg="ignoring event" container=131b9965c1580077da48c6187892a76f1cd72ca45da2b3e390ababd34c6d2e89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:02 addons-889952 dockerd[1097]: time="2023-11-27T23:29:02.895397878Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=19d95a790a05fe85d3a3a1360c00e7cfd5356868b4f3b4634f0c4e02e0e9e8b7
	Nov 27 23:29:02 addons-889952 dockerd[1097]: time="2023-11-27T23:29:02.960982914Z" level=info msg="ignoring event" container=19d95a790a05fe85d3a3a1360c00e7cfd5356868b4f3b4634f0c4e02e0e9e8b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:03 addons-889952 dockerd[1097]: time="2023-11-27T23:29:03.076264717Z" level=info msg="ignoring event" container=cdf75d25ea8dd81db7b22f4427eafb806dcc3787b75630c20c463b94ffb11987 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:03 addons-889952 dockerd[1097]: time="2023-11-27T23:29:03.364939717Z" level=info msg="ignoring event" container=2529c4e075f44f6d4e0db277c261ce727f217c9cbd1cf1592b28167200527801 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:03 addons-889952 dockerd[1097]: time="2023-11-27T23:29:03.474238708Z" level=info msg="ignoring event" container=ea7d916178e21b8336d12b98daf528dddf5df979cff40c9a4c402121e5a32056 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:04 addons-889952 cri-dockerd[1307]: time="2023-11-27T23:29:04Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/feae84e35e87364bb885e1a7fe8cf72f29a92148b307e3c6f51ef1cf8649678a/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Nov 27 23:29:04 addons-889952 dockerd[1097]: time="2023-11-27T23:29:04.879074048Z" level=info msg="ignoring event" container=8c21a71b2c9c0968bf2459e1342c432554bc1512008ce903f6e1c54bbe01242e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:29:06 addons-889952 dockerd[1097]: time="2023-11-27T23:29:06.457900956Z" level=info msg="ignoring event" container=feae84e35e87364bb885e1a7fe8cf72f29a92148b307e3c6f51ef1cf8649678a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
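	The "Container failed to exit within 2s of signal 15 - using the force" line above documents the runtime's stop contract: the daemon sends SIGTERM (signal 15), waits out the stop timeout, then escalates to SIGKILL. A minimal Go sketch of a process that honors that contract (the messages are illustrative; the 2s grace period itself is set by whatever stops the container, not by the process):

	package main

	import (
		"fmt"
		"os"
		"os/signal"
		"syscall"
	)

	func main() {
		term := make(chan os.Signal, 1)
		signal.Notify(term, syscall.SIGTERM) // signal 15
		fmt.Println("running; waiting for SIGTERM")
		<-term
		// Clean up and return before the stop timeout elapses; otherwise
		// the daemon escalates to SIGKILL ("using the force" above).
		fmt.Println("got SIGTERM; shutting down cleanly")
	}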
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8c21a71b2c9c0       fc9db2894f4e4                                                                                                                4 seconds ago        Exited              helper-pod                0                   feae84e35e873       helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe
	ea7d916178e21       dd1b12fcb6097                                                                                                                5 seconds ago        Exited              hello-world-app           2                   c02939cea9030       hello-world-app-5d77478584-d8vpn
	62a203820a146       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              11 seconds ago       Exited              helper-pod                0                   72abff446e4b0       helper-pod-create-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe
	31740e85c746a       nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77                                                34 seconds ago       Running             nginx                     0                   96b6f62b551db       nginx
	214c777d8aca7       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                  0                   a50274e87f95c       gcp-auth-d4c87556c-74mlb
	9ad5868d7b0d2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              patch                     0                   1d556f95af21a       ingress-nginx-admission-patch-4w96s
	6176fcd3c7a11       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                    0                   4ce15ac1f41ff       ingress-nginx-admission-create-vxdkq
	49802ce2c9a57       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner    0                   6c53af1ea97dd       local-path-provisioner-78b46b4d5c-q8twf
	72ce101cf5faf       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               2 minutes ago        Running             cloud-spanner-emulator    0                   4a50405dfb023       cloud-spanner-emulator-5649c69bf6-tf22h
	b343b2dd0dd8d       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   cc40abfa3af4a       storage-provisioner
	332cf32f586cb       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   0                   867e00ae4b5df       coredns-5dd5756b68-gg8kv
	737cbae0b9f3f       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                0                   910c7f831b036       kube-proxy-6vqwc
	3f1442fdd0bd9       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                      0                   d1675fb49e99c       etcd-addons-889952
	cc10a895fcbf8       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver            0                   f05bd652f49af       kube-apiserver-addons-889952
	0e93814a13934       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager   0                   531a0de47387f       kube-controller-manager-addons-889952
	d3a785b1fb867       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler            0                   1bd556ec5daeb       kube-scheduler-addons-889952
	
	* 
	* ==> coredns [332cf32f586c] <==
	* [INFO] 10.244.0.18:37838 - 6044 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000074109s
	[INFO] 10.244.0.18:37838 - 63481 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000081551s
	[INFO] 10.244.0.18:37838 - 38647 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067561s
	[INFO] 10.244.0.18:37838 - 43123 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052595s
	[INFO] 10.244.0.18:37838 - 59687 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00104821s
	[INFO] 10.244.0.18:37838 - 51688 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000854445s
	[INFO] 10.244.0.18:37838 - 61208 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075833s
	[INFO] 10.244.0.18:53688 - 65021 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000175492s
	[INFO] 10.244.0.18:51797 - 28782 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000154102s
	[INFO] 10.244.0.18:51797 - 17842 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059241s
	[INFO] 10.244.0.18:53688 - 38675 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061957s
	[INFO] 10.244.0.18:51797 - 52017 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000118097s
	[INFO] 10.244.0.18:53688 - 10208 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084234s
	[INFO] 10.244.0.18:51797 - 54073 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056091s
	[INFO] 10.244.0.18:53688 - 5978 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065051s
	[INFO] 10.244.0.18:53688 - 26340 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055024s
	[INFO] 10.244.0.18:51797 - 52767 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046877s
	[INFO] 10.244.0.18:51797 - 20904 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060193s
	[INFO] 10.244.0.18:53688 - 60726 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059512s
	[INFO] 10.244.0.18:53688 - 25131 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001618674s
	[INFO] 10.244.0.18:51797 - 12266 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001118397s
	[INFO] 10.244.0.18:51797 - 27245 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00106732s
	[INFO] 10.244.0.18:53688 - 13592 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000978843s
	[INFO] 10.244.0.18:51797 - 9228 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00010272s
	[INFO] 10.244.0.18:53688 - 4381 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00008014s
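	The NXDOMAIN/NOERROR pattern above is search-path expansion at work: the resolv.conf rewrites logged in the Docker section set ndots:5, so a name with fewer than five dots is first tried against each search suffix before being queried as-is. A stdlib-only sketch reproducing the query order (search list inferred from the suffixed queries above, which suggest the client sits in the ingress-nginx namespace; this simulates the resolver's expansion, it performs no DNS):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		name := "hello-world-app.default.svc.cluster.local" // 4 dots
		// Search list as implied by the suffixed queries logged above.
		search := []string{
			"ingress-nginx.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-east-2.compute.internal",
		}
		const ndots = 5

		// With fewer than ndots dots, each search suffix is tried first;
		// those are the NXDOMAIN queries in the coredns log.
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				fmt.Println("query:", name+"."+s, "-> NXDOMAIN")
			}
		}
		// The unsuffixed name is tried last and resolves: the NOERROR line.
		fmt.Println("query:", name, "-> NOERROR")
	}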
	
	* 
	* ==> describe nodes <==
	* Name:               addons-889952
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-889952
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=addons-889952
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_26_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-889952
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:26:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-889952
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:29:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:28:57 +0000   Mon, 27 Nov 2023 23:26:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:28:57 +0000   Mon, 27 Nov 2023 23:26:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:28:57 +0000   Mon, 27 Nov 2023 23:26:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:28:57 +0000   Mon, 27 Nov 2023 23:26:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-889952
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 171d1e3fcbef432d85f56047dc78c16a
	  System UUID:                70ec4fe5-659d-46ec-ae5a-66496bc8c26c
	  Boot ID:                    78fd6d56-9be1-4fcf-98b8-4f12948f7c56
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-tf22h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  default                     hello-world-app-5d77478584-d8vpn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-74mlb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 coredns-5dd5756b68-gg8kv                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m31s
	  kube-system                 etcd-addons-889952                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m44s
	  kube-system                 kube-apiserver-addons-889952               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-controller-manager-addons-889952      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-6vqwc                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-addons-889952               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  local-path-storage          local-path-provisioner-78b46b4d5c-q8twf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m29s  kube-proxy       
	  Normal  Starting                 2m44s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m44s  kubelet          Node addons-889952 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s  kubelet          Node addons-889952 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s  kubelet          Node addons-889952 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m44s  kubelet          Node addons-889952 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m44s  kubelet          Node addons-889952 status is now: NodeReady
	  Normal  RegisteredNode           2m32s  node-controller  Node addons-889952 event: Registered Node addons-889952 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov27 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015059] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +1.192925] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.665397] kauditd_printk_skb: 26 callbacks suppressed
	
	* 
	* ==> etcd [3f1442fdd0bd] <==
	* {"level":"info","ts":"2023-11-27T23:26:17.874564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-27T23:26:17.874855Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-27T23:26:17.875064Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-27T23:26:17.875322Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-27T23:26:17.876039Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-27T23:26:17.875696Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-27T23:26:17.875718Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-27T23:26:18.143094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-27T23:26:18.14331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-27T23:26:18.143418Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-11-27T23:26:18.143526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-27T23:26:18.143623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-27T23:26:18.143721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-11-27T23:26:18.143861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-27T23:26:18.146468Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:26:18.151699Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-889952 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-27T23:26:18.152033Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:26:18.154077Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-27T23:26:18.152163Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:26:18.159645Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:26:18.159743Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-27T23:26:18.152204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-27T23:26:18.167355Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-27T23:26:18.152418Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-27T23:26:18.167756Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> gcp-auth [214c777d8aca] <==
	* 2023/11/27 23:27:59 GCP Auth Webhook started!
	2023/11/27 23:28:03 Ready to marshal response ...
	2023/11/27 23:28:03 Ready to write response ...
	2023/11/27 23:28:10 Ready to marshal response ...
	2023/11/27 23:28:10 Ready to write response ...
	2023/11/27 23:28:32 Ready to marshal response ...
	2023/11/27 23:28:32 Ready to write response ...
	2023/11/27 23:28:32 Ready to marshal response ...
	2023/11/27 23:28:32 Ready to write response ...
	2023/11/27 23:28:42 Ready to marshal response ...
	2023/11/27 23:28:42 Ready to write response ...
	2023/11/27 23:28:56 Ready to marshal response ...
	2023/11/27 23:28:56 Ready to write response ...
	2023/11/27 23:28:56 Ready to marshal response ...
	2023/11/27 23:28:56 Ready to write response ...
	2023/11/27 23:29:04 Ready to marshal response ...
	2023/11/27 23:29:04 Ready to write response ...
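	Each "Ready to marshal/write response" pair above is the gcp-auth webhook mutating a newly created pod to mount credentials, which is why the pairs track the pod creations elsewhere in this log. To opt a pod out, per the gcp-auth-skip-secret hint printed during addon enable, labeling the pod is enough. A minimal client-go sketch, assuming a standard kubeconfig (pod name, image, and the label value are illustrative; the webhook keys off the label's presence):

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-creds-example", // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox:stable",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}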
	
	* 
	* ==> kernel <==
	*  23:29:08 up 11 min,  0 users,  load average: 1.83, 1.21, 0.52
	Linux addons-889952 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [cc10a895fcbf] <==
	* I1127 23:28:26.272441       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1127 23:28:26.283313       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1127 23:28:27.300205       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1127 23:28:32.453192       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1127 23:28:32.854096       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.9.25"}
	I1127 23:28:42.845110       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.107.152"}
	I1127 23:28:49.604673       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.604718       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:49.616523       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.616579       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:49.631033       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.631221       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:49.649232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.650425       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:49.669754       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.669801       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:49.685912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.685951       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1127 23:28:49.702210       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1127 23:28:49.702255       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1127 23:28:50.650748       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1127 23:28:50.688110       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1127 23:28:50.717358       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1127 23:28:59.935455       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1127 23:29:00.139096       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [0e93814a1393] <==
	* W1127 23:28:52.005644       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:52.005678       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:28:53.647623       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:53.647652       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:28:54.147492       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:54.147524       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:28:54.979061       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:54.979091       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:28:55.991185       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I1127 23:28:56.226479       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1127 23:28:58.685310       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:58.685347       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1127 23:28:59.756457       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:59.756488       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:28:59.851314       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1127 23:28:59.855744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="5.062µs"
	I1127 23:28:59.865398       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1127 23:28:59.889673       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1127 23:28:59.889705       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1127 23:29:04.326931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.507µs"
	I1127 23:29:04.756674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="7.82µs"
	I1127 23:29:07.303663       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1127 23:29:07.303697       1 shared_informer.go:318] Caches are synced for resource quota
	I1127 23:29:07.617815       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1127 23:29:07.619261       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [737cbae0b9f3] <==
	* I1127 23:26:38.840572       1 server_others.go:69] "Using iptables proxy"
	I1127 23:26:38.863123       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1127 23:26:38.932569       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1127 23:26:38.934709       1 server_others.go:152] "Using iptables Proxier"
	I1127 23:26:38.934734       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1127 23:26:38.934742       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1127 23:26:38.934813       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1127 23:26:38.935013       1 server.go:846] "Version info" version="v1.28.4"
	I1127 23:26:38.935023       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1127 23:26:38.935897       1 config.go:188] "Starting service config controller"
	I1127 23:26:38.935937       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1127 23:26:38.935955       1 config.go:97] "Starting endpoint slice config controller"
	I1127 23:26:38.935958       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1127 23:26:38.937279       1 config.go:315] "Starting node config controller"
	I1127 23:26:38.937290       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1127 23:26:39.036087       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1127 23:26:39.036150       1 shared_informer.go:318] Caches are synced for service config
	I1127 23:26:39.037878       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d3a785b1fb86] <==
	* W1127 23:26:21.834234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:26:21.834346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:26:21.834373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1127 23:26:21.834449       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:26:21.834514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1127 23:26:21.834446       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1127 23:26:21.834600       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:26:21.834618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1127 23:26:21.834712       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1127 23:26:21.834800       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1127 23:26:21.834845       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 23:26:21.834970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:26:21.834993       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:26:21.835043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1127 23:26:21.834765       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:26:21.835178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1127 23:26:21.835055       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:26:21.835325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1127 23:26:21.835213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1127 23:26:21.835464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1127 23:26:21.834893       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:26:21.835615       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1127 23:26:21.835119       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:26:21.835711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1127 23:26:22.824389       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.084021    2301 memory_manager.go:346] "RemoveStaleState removing state" podUID="df9fd120-3c2d-4217-be0b-398e547e2424" containerName="busybox"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.084032    2301 memory_manager.go:346] "RemoveStaleState removing state" podUID="f25cb5f9-6368-410a-8d5a-cbade95ccf8e" containerName="minikube-ingress-dns"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.273835    2301 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-gcp-creds\") pod \"helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") " pod="local-path-storage/helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.273883    2301 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-script\") pod \"helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") " pod="local-path-storage/helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.273912    2301 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-data\") pod \"helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") " pod="local-path-storage/helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.273940    2301 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsw5n\" (UniqueName: \"kubernetes.io/projected/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-kube-api-access-lsw5n\") pod \"helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") " pod="local-path-storage/helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.293969    2301 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a9f6bdb4-7a90-4923-8972-ace2a8f003c5" path="/var/lib/kubelet/pods/a9f6bdb4-7a90-4923-8972-ace2a8f003c5/volumes"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.294643    2301 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="df9fd120-3c2d-4217-be0b-398e547e2424" path="/var/lib/kubelet/pods/df9fd120-3c2d-4217-be0b-398e547e2424/volumes"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.312042    2301 scope.go:117] "RemoveContainer" containerID="bc3e0c95cfb30553c87429a0645705a769a91f506f80d834219d655ffa1f8677"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.312379    2301 scope.go:117] "RemoveContainer" containerID="ea7d916178e21b8336d12b98daf528dddf5df979cff40c9a4c402121e5a32056"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: E1127 23:29:04.312626    2301 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-d8vpn_default(aca366bb-a966-4e8f-a6c8-f581b31b694e)\"" pod="default/hello-world-app-5d77478584-d8vpn" podUID="aca366bb-a966-4e8f-a6c8-f581b31b694e"
	Nov 27 23:29:04 addons-889952 kubelet[2301]: I1127 23:29:04.337155    2301 scope.go:117] "RemoveContainer" containerID="131b9965c1580077da48c6187892a76f1cd72ca45da2b3e390ababd34c6d2e89"
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.589526    2301 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-gcp-creds\") pod \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") "
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.590019    2301 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsw5n\" (UniqueName: \"kubernetes.io/projected/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-kube-api-access-lsw5n\") pod \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") "
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.590060    2301 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-data\") pod \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") "
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.590090    2301 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-script\") pod \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\" (UID: \"7e38ecf4-1e59-4c62-9efb-a6970bb746ad\") "
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.590523    2301 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-script" (OuterVolumeSpecName: "script") pod "7e38ecf4-1e59-4c62-9efb-a6970bb746ad" (UID: "7e38ecf4-1e59-4c62-9efb-a6970bb746ad"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.590570    2301 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-data" (OuterVolumeSpecName: "data") pod "7e38ecf4-1e59-4c62-9efb-a6970bb746ad" (UID: "7e38ecf4-1e59-4c62-9efb-a6970bb746ad"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.589602    2301 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7e38ecf4-1e59-4c62-9efb-a6970bb746ad" (UID: "7e38ecf4-1e59-4c62-9efb-a6970bb746ad"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.594265    2301 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-kube-api-access-lsw5n" (OuterVolumeSpecName: "kube-api-access-lsw5n") pod "7e38ecf4-1e59-4c62-9efb-a6970bb746ad" (UID: "7e38ecf4-1e59-4c62-9efb-a6970bb746ad"). InnerVolumeSpecName "kube-api-access-lsw5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.690927    2301 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-gcp-creds\") on node \"addons-889952\" DevicePath \"\""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.690967    2301 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lsw5n\" (UniqueName: \"kubernetes.io/projected/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-kube-api-access-lsw5n\") on node \"addons-889952\" DevicePath \"\""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.690981    2301 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-data\") on node \"addons-889952\" DevicePath \"\""
	Nov 27 23:29:06 addons-889952 kubelet[2301]: I1127 23:29:06.690992    2301 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/7e38ecf4-1e59-4c62-9efb-a6970bb746ad-script\") on node \"addons-889952\" DevicePath \"\""
	Nov 27 23:29:07 addons-889952 kubelet[2301]: I1127 23:29:07.393073    2301 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feae84e35e87364bb885e1a7fe8cf72f29a92148b307e3c6f51ef1cf8649678a"
	
	* 
	* ==> storage-provisioner [b343b2dd0dd8] <==
	* I1127 23:26:44.769780       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:26:44.799017       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:26:44.799105       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:26:44.811796       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:26:44.811939       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7c89fa3f-13d2-424b-a4ba-d787a1b2c7b3", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-889952_11934482-562e-4d95-b3ad-6c54d9074383 became leader
	I1127 23:26:44.812399       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-889952_11934482-562e-4d95-b3ad-6c54d9074383!
	I1127 23:26:44.913947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-889952_11934482-562e-4d95-b3ad-6c54d9074383!
	

-- /stdout --
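
Note on the kube-scheduler lines near the top of the log above: the "cannot list resource" warnings for pods and csinodes are typical early-startup noise, emitted before the apiserver has finished installing its bootstrap RBAC policy, and they stop on their own once the informer caches sync. As a hedged spot-check on a live profile (standard kubectl impersonation flags, not part of this harness), both of the following should answer "yes" after the cluster settles:

	kubectl --context addons-889952 auth can-i list pods --as=system:kube-scheduler
	kubectl --context addons-889952 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler
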
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-889952 -n addons-889952
helpers_test.go:261: (dbg) Run:  kubectl --context addons-889952 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-889952 describe pod helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-889952 describe pod helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe: exit status 1 (86.608049ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-889952 describe pod helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe: exit status 1
--- FAIL: TestAddons/parallel/Ingress (37.82s)
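
The NotFound from the final describe above is a benign race rather than a second failure: the local-path helper pod was listed at helpers_test.go:272 but finished and was cleaned up (see the kubelet volume-teardown lines at 23:29:06) before the describe at helpers_test.go:277 ran. A tolerant re-check would use kubectl get with --ignore-not-found (a standard kubectl flag; sketch only, not part of the harness), which prints nothing and exits 0 once the pod is gone:

	kubectl --context addons-889952 get pod helper-pod-delete-pvc-330b5ec0-97af-4e44-ab94-121d2102abbe --ignore-not-found
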

TestIngressAddonLegacy/serial/ValidateIngressAddons (53.05s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-916543 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-916543 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.881136526s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-916543 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-916543 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5ed44244-df4d-4221-9e00-412c6c2bbad9] Pending
helpers_test.go:344: "nginx" [5ed44244-df4d-4221-9e00-412c6c2bbad9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5ed44244-df4d-4221-9e00-412c6c2bbad9] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.010144136s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-916543 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.009457546s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons disable ingress-dns --alsologtostderr -v=1: (5.338490871s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons disable ingress --alsologtostderr -v=1
E1127 23:38:00.392136    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons disable ingress --alsologtostderr -v=1: (7.518379461s)
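
Before the post-mortem: the only real failure above is the 15-second nslookup timeout against 192.168.49.2, the same timeout that failed TestAddons/parallel/Ingress earlier; both addon disables succeeded, and the stray E1127 cert_rotation line appears to be a leftover watch on the already-removed addons-889952 profile's client cert, unrelated to this test. While the ingress-addon-legacy-916543 profile is still up, a manual repro might look like the following (the first two commands mirror the failing steps; dig is an assumption here, chosen because +time/+tries make it give up faster than nslookup's roughly 15 seconds of default retries):

	out/minikube-linux-arm64 -p ingress-addon-legacy-916543 ip
	nslookup hello-john.test 192.168.49.2
	dig @192.168.49.2 hello-john.test +time=5 +tries=1

Since the docker inspect output below confirms 192.168.49.2 is the node container's address, the timeout points at the ingress-dns resolver on that node (or port 53 on the path to it) not answering, rather than at either ingress controller.
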
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-916543
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-916543:

-- stdout --
	[
	    {
	        "Id": "49ea840722e006f72390b64b1b766bb55fdd45c9289625d197c4a8db420b746a",
	        "Created": "2023-11-27T23:36:05.419780218Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 55036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-27T23:36:05.732836805Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/49ea840722e006f72390b64b1b766bb55fdd45c9289625d197c4a8db420b746a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49ea840722e006f72390b64b1b766bb55fdd45c9289625d197c4a8db420b746a/hostname",
	        "HostsPath": "/var/lib/docker/containers/49ea840722e006f72390b64b1b766bb55fdd45c9289625d197c4a8db420b746a/hosts",
	        "LogPath": "/var/lib/docker/containers/49ea840722e006f72390b64b1b766bb55fdd45c9289625d197c4a8db420b746a/49ea840722e006f72390b64b1b766bb55fdd45c9289625d197c4a8db420b746a-json.log",
	        "Name": "/ingress-addon-legacy-916543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-916543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-916543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f5aadd72ce3424af21aa448ec87ec81b9a3a57253da6aa9b9e1c94124794d3de-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f5aadd72ce3424af21aa448ec87ec81b9a3a57253da6aa9b9e1c94124794d3de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f5aadd72ce3424af21aa448ec87ec81b9a3a57253da6aa9b9e1c94124794d3de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f5aadd72ce3424af21aa448ec87ec81b9a3a57253da6aa9b9e1c94124794d3de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-916543",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-916543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-916543",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-916543",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-916543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "39e1fd4ce9bbd81adccbd2aa613b50d42464d8a1b8390f5b51809aae69ae05aa",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/39e1fd4ce9bb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-916543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "49ea840722e0",
	                        "ingress-addon-legacy-916543"
	                    ],
	                    "NetworkID": "4909ed198aa00a0a74e6cbd85dd557cbdf16d561b296daed22edfbd7fa940d79",
	                    "EndpointID": "00f3c3d5da341b6ced4529d1fe23dd9baa888b6e0700e3f8cedc2e140efc5ed4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
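
Rather than scanning the full inspect dump for the address the test resolves against, the single relevant field can be pulled with a plain docker inspect format query (standard docker CLI templating, shown here only as a convenience):

	docker inspect -f '{{ (index .NetworkSettings.Networks "ingress-addon-legacy-916543").IPAddress }}' ingress-addon-legacy-916543
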
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-916543 -n ingress-addon-legacy-916543
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-916543 logs -n 25: (1.007449691s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-689033                     | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|                | --kill=true                              |                             |         |         |                     |                     |
	| update-context | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:34 UTC |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-689033 ssh pgrep              | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-689033 image build -t         | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:34 UTC | 27 Nov 23 23:35 UTC |
	|                | localhost/my-image:functional-689033     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-689033 image ls               | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	| image          | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-689033                        | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-689033                     | functional-689033           | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	| start          | -p image-482214                          | image-482214                | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | --driver=docker                          |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-482214                | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-482214                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-482214                | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-482214                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-482214                | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-482214                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-482214                | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-482214                          |                             |         |         |                     |                     |
	| delete         | -p image-482214                          | image-482214                | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:35 UTC |
	| start          | -p ingress-addon-legacy-916543           | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:35 UTC | 27 Nov 23 23:36 UTC |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                     |                             |         |         |                     |                     |
	|                | --container-runtime=docker               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-916543              | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:36 UTC | 27 Nov 23 23:37 UTC |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-916543              | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:37 UTC |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-916543              | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:37 UTC |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-916543 ip           | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:37 UTC |
	| addons         | ingress-addon-legacy-916543              | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:37 UTC |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-916543              | ingress-addon-legacy-916543 | jenkins | v1.32.0 | 27 Nov 23 23:37 UTC | 27 Nov 23 23:38 UTC |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:35:47
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:35:47.232234   54577 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:35:47.232375   54577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:35:47.232384   54577 out.go:309] Setting ErrFile to fd 2...
	I1127 23:35:47.232391   54577 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:35:47.232643   54577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1127 23:35:47.233059   54577 out.go:303] Setting JSON to false
	I1127 23:35:47.233858   54577 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1097,"bootTime":1701127051,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:35:47.233926   54577 start.go:138] virtualization:  
	I1127 23:35:47.237102   54577 out.go:177] * [ingress-addon-legacy-916543] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:35:47.239784   54577 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:35:47.242048   54577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:35:47.239899   54577 notify.go:220] Checking for updates...
	I1127 23:35:47.247258   54577 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:35:47.249527   54577 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:35:47.251450   54577 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:35:47.253703   54577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:35:47.256022   54577 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:35:47.281323   54577 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:35:47.281445   54577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:35:47.375182   54577 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-27 23:35:47.365922206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:35:47.375282   54577 docker.go:295] overlay module found
	I1127 23:35:47.377713   54577 out.go:177] * Using the docker driver based on user configuration
	I1127 23:35:47.379840   54577 start.go:298] selected driver: docker
	I1127 23:35:47.379854   54577 start.go:902] validating driver "docker" against <nil>
	I1127 23:35:47.379872   54577 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:35:47.380459   54577 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:35:47.449567   54577 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-27 23:35:47.440298255 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:35:47.449732   54577 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:35:47.449961   54577 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 23:35:47.452107   54577 out.go:177] * Using Docker driver with root privileges
	I1127 23:35:47.454046   54577 cni.go:84] Creating CNI manager for ""
	I1127 23:35:47.454068   54577 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1127 23:35:47.454080   54577 start_flags.go:323] config:
	{Name:ingress-addon-legacy-916543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-916543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:35:47.457828   54577 out.go:177] * Starting control plane node ingress-addon-legacy-916543 in cluster ingress-addon-legacy-916543
	I1127 23:35:47.459701   54577 cache.go:121] Beginning downloading kic base image for docker with docker
	I1127 23:35:47.461782   54577 out.go:177] * Pulling base image ...
	I1127 23:35:47.463758   54577 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1127 23:35:47.463841   54577 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:35:47.480512   54577 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1127 23:35:47.480533   54577 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1127 23:35:47.537561   54577 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1127 23:35:47.537587   54577 cache.go:56] Caching tarball of preloaded images
	I1127 23:35:47.537740   54577 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1127 23:35:47.540086   54577 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1127 23:35:47.542143   54577 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:35:47.814082   54577 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1127 23:35:58.215905   54577 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:35:58.216006   54577 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:35:59.314582   54577 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1127 23:35:59.314960   54577 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/config.json ...
	I1127 23:35:59.314992   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/config.json: {Name:mk0989e7285c7e11c125cb8a39683d77d699ab2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:35:59.315183   54577 cache.go:194] Successfully downloaded all kic artifacts
	I1127 23:35:59.315222   54577 start.go:365] acquiring machines lock for ingress-addon-legacy-916543: {Name:mk0fed78f2633aee7c0df1e8f2e10aa90a50c0e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 23:35:59.315286   54577 start.go:369] acquired machines lock for "ingress-addon-legacy-916543" in 47.279µs
	I1127 23:35:59.315309   54577 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-916543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-916543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 23:35:59.315378   54577 start.go:125] createHost starting for "" (driver="docker")
	I1127 23:35:59.318016   54577 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1127 23:35:59.318219   54577 start.go:159] libmachine.API.Create for "ingress-addon-legacy-916543" (driver="docker")
	I1127 23:35:59.318239   54577 client.go:168] LocalClient.Create starting
	I1127 23:35:59.318352   54577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem
	I1127 23:35:59.318389   54577 main.go:141] libmachine: Decoding PEM data...
	I1127 23:35:59.318408   54577 main.go:141] libmachine: Parsing certificate...
	I1127 23:35:59.318463   54577 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem
	I1127 23:35:59.318484   54577 main.go:141] libmachine: Decoding PEM data...
	I1127 23:35:59.318499   54577 main.go:141] libmachine: Parsing certificate...
	I1127 23:35:59.318868   54577 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-916543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1127 23:35:59.336125   54577 cli_runner.go:211] docker network inspect ingress-addon-legacy-916543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1127 23:35:59.336195   54577 network_create.go:281] running [docker network inspect ingress-addon-legacy-916543] to gather additional debugging logs...
	I1127 23:35:59.336210   54577 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-916543
	W1127 23:35:59.351402   54577 cli_runner.go:211] docker network inspect ingress-addon-legacy-916543 returned with exit code 1
	I1127 23:35:59.351427   54577 network_create.go:284] error running [docker network inspect ingress-addon-legacy-916543]: docker network inspect ingress-addon-legacy-916543: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-916543 not found
	I1127 23:35:59.351439   54577 network_create.go:286] output of [docker network inspect ingress-addon-legacy-916543]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-916543 not found
	
	** /stderr **
	I1127 23:35:59.351525   54577 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:35:59.367334   54577 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000532620}
	I1127 23:35:59.367371   54577 network_create.go:124] attempt to create docker network ingress-addon-legacy-916543 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1127 23:35:59.367428   54577 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-916543 ingress-addon-legacy-916543
	I1127 23:35:59.436024   54577 network_create.go:108] docker network ingress-addon-legacy-916543 192.168.49.0/24 created
	I1127 23:35:59.436052   54577 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-916543" container
	I1127 23:35:59.436119   54577 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1127 23:35:59.451782   54577 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-916543 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-916543 --label created_by.minikube.sigs.k8s.io=true
	I1127 23:35:59.469439   54577 oci.go:103] Successfully created a docker volume ingress-addon-legacy-916543
	I1127 23:35:59.469517   54577 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-916543-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-916543 --entrypoint /usr/bin/test -v ingress-addon-legacy-916543:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib
	I1127 23:36:00.806222   54577 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-916543-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-916543 --entrypoint /usr/bin/test -v ingress-addon-legacy-916543:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -d /var/lib: (1.336670682s)
	I1127 23:36:00.806254   54577 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-916543
	I1127 23:36:00.806273   54577 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1127 23:36:00.806292   54577 kic.go:194] Starting extracting preloaded images to volume ...
	I1127 23:36:00.806410   54577 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-916543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir
	I1127 23:36:05.342078   54577 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-916543:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 -I lz4 -xf /preloaded.tar -C /extractDir: (4.53560526s)
	I1127 23:36:05.342112   54577 kic.go:203] duration metric: took 4.535817 seconds to extract preloaded images to volume
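The paired "Run:" / "Completed: … (4.53560526s)" lines come from timing a long docker command and logging a duration metric on completion. A sketch of that pattern, assuming placeholder values for the host tarball path and kicbase image reference (the tar flags match the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Extract the preloaded image tarball into the named volume by
		// running tar inside a throwaway container, as the log shows.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/path/to/preloaded.tar.lz4:/preloaded.tar:ro", // placeholder path
			"-v", "ingress-addon-legacy-916543:/extractDir",
			"kicbase-image", // placeholder image ref
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if err := cmd.Run(); err != nil {
			fmt.Println("run failed:", err)
			return
		}
		fmt.Printf("Completed in %s\n", time.Since(start))
	}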
	W1127 23:36:05.342243   54577 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1127 23:36:05.342387   54577 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1127 23:36:05.405012   54577 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-916543 --name ingress-addon-legacy-916543 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-916543 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-916543 --network ingress-addon-legacy-916543 --ip 192.168.49.2 --volume ingress-addon-legacy-916543:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1127 23:36:05.741139   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Running}}
	I1127 23:36:05.770914   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Status}}
	I1127 23:36:05.797627   54577 cli_runner.go:164] Run: docker exec ingress-addon-legacy-916543 stat /var/lib/dpkg/alternatives/iptables
	I1127 23:36:05.884697   54577 oci.go:144] the created container "ingress-addon-legacy-916543" has a running status.
	I1127 23:36:05.884722   54577 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa...
	I1127 23:36:06.333042   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1127 23:36:06.333109   54577 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1127 23:36:06.360912   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Status}}
	I1127 23:36:06.384524   54577 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1127 23:36:06.384547   54577 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-916543 chown docker:docker /home/docker/.ssh/authorized_keys]
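At this stage kic_runner executes commands inside the node container via docker exec rather than over SSH. A sketch of the Args pattern logged at kic_runner.go:114, fixing ownership of the key file just copied in:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		container := "ingress-addon-legacy-916543"
		args := []string{"exec", "--privileged", container,
			"chown", "docker:docker", "/home/docker/.ssh/authorized_keys"}
		out, err := exec.Command("docker", args...).CombinedOutput()
		fmt.Printf("docker %v\n%s err=%v\n", args, out, err)
	}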
	I1127 23:36:06.475363   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Status}}
	I1127 23:36:06.497655   54577 machine.go:88] provisioning docker machine ...
	I1127 23:36:06.497688   54577 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-916543"
	I1127 23:36:06.497750   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:06.537077   54577 main.go:141] libmachine: Using SSH client type: native
	I1127 23:36:06.537512   54577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1127 23:36:06.537532   54577 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-916543 && echo "ingress-addon-legacy-916543" | sudo tee /etc/hostname
	I1127 23:36:06.721156   54577 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-916543
	
	I1127 23:36:06.721282   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:06.741547   54577 main.go:141] libmachine: Using SSH client type: native
	I1127 23:36:06.741927   54577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1127 23:36:06.741950   54577 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-916543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-916543/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-916543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 23:36:06.875246   54577 main.go:141] libmachine: SSH cmd err, output: <nil>: 
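Each "Using SSH client type: native" / "SSH cmd err, output" pair corresponds to one command executed over SSH to the container's forwarded port (127.0.0.1:32792 here). A minimal sketch of that round trip with golang.org/x/crypto/ssh; the key path is a placeholder and error handling is trimmed:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, _ := os.ReadFile("/path/to/machines/ingress-addon-legacy-916543/id_rsa") // placeholder
		signer, _ := ssh.ParsePrivateKey(key)
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32792", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, _ := client.NewSession()
		defer sess.Close()
		out, _ := sess.CombinedOutput("hostname")
		fmt.Printf("SSH cmd output: %s", out)
	}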
	I1127 23:36:06.875277   54577 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-2172/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-2172/.minikube}
	I1127 23:36:06.875303   54577 ubuntu.go:177] setting up certificates
	I1127 23:36:06.875313   54577 provision.go:83] configureAuth start
	I1127 23:36:06.875377   54577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-916543
	I1127 23:36:06.909267   54577 provision.go:138] copyHostCerts
	I1127 23:36:06.909307   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem
	I1127 23:36:06.909335   54577 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem, removing ...
	I1127 23:36:06.909345   54577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem
	I1127 23:36:06.909421   54577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem (1123 bytes)
	I1127 23:36:06.909495   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem
	I1127 23:36:06.909514   54577 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem, removing ...
	I1127 23:36:06.909519   54577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem
	I1127 23:36:06.909548   54577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem (1679 bytes)
	I1127 23:36:06.909596   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem
	I1127 23:36:06.909620   54577 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem, removing ...
	I1127 23:36:06.909628   54577 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem
	I1127 23:36:06.909654   54577 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem (1078 bytes)
	I1127 23:36:06.909707   54577 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-916543 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-916543]
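provision.go:112 generates a server certificate signed by the local CA, carrying exactly the SAN list logged above. A compact crypto/x509 sketch of that step (keys generated inline so it runs standalone; the real code loads the existing CA from .minikube/certs):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-916543"}},
			// SANs as logged: IPs plus hostnames.
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-916543"},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().AddDate(3, 0, 0),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
		fmt.Println(len(der), err)
	}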
	I1127 23:36:07.647625   54577 provision.go:172] copyRemoteCerts
	I1127 23:36:07.647691   54577 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 23:36:07.647738   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:07.664124   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:07.756248   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1127 23:36:07.756303   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 23:36:07.782608   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1127 23:36:07.782667   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1127 23:36:07.807728   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1127 23:36:07.807826   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 23:36:07.832672   54577 provision.go:86] duration metric: configureAuth took 957.344135ms
	I1127 23:36:07.832702   54577 ubuntu.go:193] setting minikube options for container-runtime
	I1127 23:36:07.832889   54577 config.go:182] Loaded profile config "ingress-addon-legacy-916543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1127 23:36:07.832947   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:07.849815   54577 main.go:141] libmachine: Using SSH client type: native
	I1127 23:36:07.850221   54577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1127 23:36:07.850238   54577 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1127 23:36:07.975532   54577 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1127 23:36:07.975550   54577 ubuntu.go:71] root file system type: overlay
	I1127 23:36:07.975679   54577 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1127 23:36:07.975749   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:07.993256   54577 main.go:141] libmachine: Using SSH client type: native
	I1127 23:36:07.993703   54577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1127 23:36:07.993786   54577 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1127 23:36:08.131403   54577 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1127 23:36:08.131487   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:08.149940   54577 main.go:141] libmachine: Using SSH client type: native
	I1127 23:36:08.150371   54577 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I1127 23:36:08.150398   54577 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1127 23:36:08.915959   54577 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-11-27 23:36:08.125415121 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1127 23:36:08.915986   54577 machine.go:91] provisioned docker machine in 2.418308603s
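The unit update just completed is idempotent: the rendered file is staged as docker.service.new, diffed against the live unit, and only swapped in (with daemon-reload, enable, restart) when the two differ, as the one-liner SSH command above shows. A sketch of that write-if-changed pattern, assuming the .new file was already staged (systemctl calls need root):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const cur = "/lib/systemd/system/docker.service"
		const next = cur + ".new"
		a, _ := os.ReadFile(cur)
		b, _ := os.ReadFile(next)
		if bytes.Equal(a, b) {
			fmt.Println("unit unchanged; skipping restart")
			return
		}
		// Swap in the new unit, then reload and restart docker.
		if err := os.Rename(next, cur); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"-f", "enable", "docker"}, {"-f", "restart", "docker"},
		} {
			exec.Command("systemctl", args...).Run()
		}
	}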
	I1127 23:36:08.915996   54577 client.go:171] LocalClient.Create took 9.597751619s
	I1127 23:36:08.916008   54577 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-916543" took 9.597788435s
	I1127 23:36:08.916017   54577 start.go:300] post-start starting for "ingress-addon-legacy-916543" (driver="docker")
	I1127 23:36:08.916027   54577 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 23:36:08.916092   54577 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 23:36:08.916136   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:08.937922   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:09.033194   54577 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 23:36:09.037153   54577 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1127 23:36:09.037190   54577 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1127 23:36:09.037201   54577 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1127 23:36:09.037209   54577 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1127 23:36:09.037222   54577 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-2172/.minikube/addons for local assets ...
	I1127 23:36:09.037276   54577 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-2172/.minikube/files for local assets ...
	I1127 23:36:09.037355   54577 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem -> 74602.pem in /etc/ssl/certs
	I1127 23:36:09.037365   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem -> /etc/ssl/certs/74602.pem
	I1127 23:36:09.037467   54577 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 23:36:09.047428   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem --> /etc/ssl/certs/74602.pem (1708 bytes)
	I1127 23:36:09.074936   54577 start.go:303] post-start completed in 158.903002ms
	I1127 23:36:09.075309   54577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-916543
	I1127 23:36:09.091953   54577 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/config.json ...
	I1127 23:36:09.092212   54577 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:36:09.092259   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:09.108791   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:09.196100   54577 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1127 23:36:09.201473   54577 start.go:128] duration metric: createHost completed in 9.886081468s
	I1127 23:36:09.201496   54577 start.go:83] releasing machines lock for "ingress-addon-legacy-916543", held for 9.886197712s
	I1127 23:36:09.201563   54577 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-916543
	I1127 23:36:09.218177   54577 ssh_runner.go:195] Run: cat /version.json
	I1127 23:36:09.218199   54577 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 23:36:09.218226   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:09.218252   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:09.237030   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:09.246577   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:09.326399   54577 ssh_runner.go:195] Run: systemctl --version
	I1127 23:36:09.465907   54577 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1127 23:36:09.471256   54577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1127 23:36:09.499694   54577 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1127 23:36:09.499801   54577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1127 23:36:09.519445   54577 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1127 23:36:09.538379   54577 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 23:36:09.538406   54577 start.go:472] detecting cgroup driver to use...
	I1127 23:36:09.538446   54577 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:36:09.538548   54577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:36:09.557429   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1127 23:36:09.568550   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1127 23:36:09.579806   54577 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1127 23:36:09.579874   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1127 23:36:09.591008   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 23:36:09.601781   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1127 23:36:09.612552   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 23:36:09.623143   54577 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 23:36:09.633414   54577 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1127 23:36:09.644386   54577 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 23:36:09.654104   54577 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 23:36:09.663712   54577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:36:09.755237   54577 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1127 23:36:09.876653   54577 start.go:472] detecting cgroup driver to use...
	I1127 23:36:09.876703   54577 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1127 23:36:09.876754   54577 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1127 23:36:09.895279   54577 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1127 23:36:09.895348   54577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 23:36:09.911480   54577 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 23:36:09.932163   54577 ssh_runner.go:195] Run: which cri-dockerd
	I1127 23:36:09.936789   54577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1127 23:36:09.946642   54577 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1127 23:36:09.970091   54577 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1127 23:36:10.082100   54577 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1127 23:36:10.187143   54577 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1127 23:36:10.187271   54577 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1127 23:36:10.208626   54577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:36:10.301083   54577 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1127 23:36:10.557067   54577 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1127 23:36:10.582108   54577 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1127 23:36:10.612596   54577 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1127 23:36:10.612713   54577 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-916543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1127 23:36:10.629014   54577 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1127 23:36:10.633454   54577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
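The one-liner above first drops any stale host.minikube.internal line (grep -v), then appends the current mapping, so repeated runs leave exactly one entry. The same idempotent update expressed as a sketch in Go, writing to a temp path instead of /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.49.1\thost.minikube.internal"
		data, _ := os.ReadFile("/etc/hosts")
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// Drop any previous mapping for this name, as the grep -v does.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		_ = os.WriteFile("/tmp/hosts.updated", []byte(out), 0644) // stand-in for the sudo cp
		fmt.Print(out)
	}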
	I1127 23:36:10.646097   54577 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1127 23:36:10.646163   54577 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 23:36:10.665877   54577 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1127 23:36:10.665896   54577 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1127 23:36:10.665945   54577 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1127 23:36:10.675688   54577 ssh_runner.go:195] Run: which lz4
	I1127 23:36:10.679868   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1127 23:36:10.679963   54577 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1127 23:36:10.683976   54577 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1127 23:36:10.684006   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1127 23:36:12.707729   54577 docker.go:635] Took 2.027783 seconds to copy over tarball
	I1127 23:36:12.707835   54577 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1127 23:36:15.045391   54577 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.337516893s)
	I1127 23:36:15.045460   54577 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1127 23:36:15.166783   54577 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1127 23:36:15.177494   54577 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1127 23:36:15.197261   54577 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 23:36:15.287926   54577 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1127 23:36:17.533640   54577 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.245659279s)
	I1127 23:36:17.533725   54577 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 23:36:17.554492   54577 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1127 23:36:17.554515   54577 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1127 23:36:17.554525   54577 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1127 23:36:17.557285   54577 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:36:17.557456   54577 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1127 23:36:17.557676   54577 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:36:17.557753   54577 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:36:17.557823   54577 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:36:17.557895   54577 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:36:17.557956   54577 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:36:17.558015   54577 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1127 23:36:17.559850   54577 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:36:17.560212   54577 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:36:17.560435   54577 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1127 23:36:17.560574   54577 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:36:17.560696   54577 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:36:17.560810   54577 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:36:17.560919   54577 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1127 23:36:17.561167   54577 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W1127 23:36:17.941820   54577 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:17.942048   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1127 23:36:17.962380   54577 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1127 23:36:17.962422   54577 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1127 23:36:17.962469   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	W1127 23:36:17.967572   54577 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:17.967747   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1127 23:36:17.983073   54577 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:17.983326   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1127 23:36:17.984157   54577 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:17.984316   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1127 23:36:17.990623   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1127 23:36:17.997455   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1127 23:36:17.997631   54577 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:17.997770   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1127 23:36:18.003698   54577 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:18.003859   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:36:18.010644   54577 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1127 23:36:18.010683   54577 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:36:18.010730   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1127 23:36:18.027970   54577 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1127 23:36:18.028011   54577 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:36:18.028054   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1127 23:36:18.053155   54577 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1127 23:36:18.053201   54577 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1127 23:36:18.053253   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	W1127 23:36:18.061946   54577 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1127 23:36:18.062091   54577 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:36:18.065000   54577 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1127 23:36:18.065044   54577 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1127 23:36:18.065092   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1127 23:36:18.112782   54577 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1127 23:36:18.112825   54577 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:36:18.112871   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1127 23:36:18.118759   54577 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1127 23:36:18.118800   54577 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:36:18.118853   54577 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1127 23:36:18.118919   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1127 23:36:18.118956   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1127 23:36:18.152312   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1127 23:36:18.155610   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1127 23:36:18.155756   54577 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1127 23:36:18.155798   54577 docker.go:323] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:36:18.155841   54577 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:36:18.168696   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1127 23:36:18.168998   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1127 23:36:18.187509   54577 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1127 23:36:18.187574   54577 cache_images.go:92] LoadImages completed in 633.037433ms
	W1127 23:36:18.187652   54577 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17206-2172/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7: no such file or directory
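The "needs transfer" decisions above compare the runtime's image ID against the expected arch-correct hash, then remove the mismatched (amd64) image before loading the arm64 one from the on-disk cache. A sketch of that check; the digest is taken from the coredns log line above, and the sha256: prefix reflects how docker reports .Id (the real comparison lives in cache_images.go:116):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("docker", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // image not present at all
		}
		return strings.TrimSpace(string(out)) != wantID
	}

	func main() {
		img := "registry.k8s.io/coredns:1.6.7"
		want := "sha256:ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c"
		if needsTransfer(img, want) {
			// Remove the wrong-arch image, then load the cached one
			// (docker rmi, then a load from .minikube/cache/images).
			exec.Command("docker", "rmi", img).Run()
			fmt.Println("reloading", img, "from .minikube/cache/images")
		}
	}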
	I1127 23:36:18.187706   54577 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1127 23:36:18.245906   54577 cni.go:84] Creating CNI manager for ""
	I1127 23:36:18.245932   54577 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1127 23:36:18.245953   54577 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1127 23:36:18.245972   54577 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-916543 NodeName:ingress-addon-legacy-916543 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1127 23:36:18.246111   54577 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-916543"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1127 23:36:18.246182   54577 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-916543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-916543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
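The kubeadm options struct at kubeadm.go:176 is rendered into the YAML config at kubeadm.go:181 via templating. A toy text/template sketch of that rendering step, reduced to a few of the fields shown above (not the minikube template itself):

	package main

	import (
		"os"
		"text/template"
	)

	type opts struct {
		NodeName, AdvertiseAddress, KubernetesVersion, PodSubnet string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		_ = t.Execute(os.Stdout, opts{
			NodeName:          "ingress-addon-legacy-916543",
			AdvertiseAddress:  "192.168.49.2",
			KubernetesVersion: "v1.18.20",
			PodSubnet:         "10.244.0.0/16",
		})
	}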
	I1127 23:36:18.246243   54577 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1127 23:36:18.256239   54577 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 23:36:18.256306   54577 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 23:36:18.265908   54577 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1127 23:36:18.285166   54577 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1127 23:36:18.304470   54577 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1127 23:36:18.323978   54577 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1127 23:36:18.328062   54577 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 23:36:18.339951   54577 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543 for IP: 192.168.49.2
	I1127 23:36:18.339982   54577 certs.go:190] acquiring lock for shared ca certs: {Name:mkf476800f388ef5f0e09831530252d4aaf23bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:18.340144   54577 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key
	I1127 23:36:18.340196   54577 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key
	I1127 23:36:18.340248   54577 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.key
	I1127 23:36:18.340263   54577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt with IP's: []
	I1127 23:36:19.293745   54577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt ...
	I1127 23:36:19.293774   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: {Name:mk49a4897bd59ec85ea6f0855e7499f144d94cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:19.293982   54577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.key ...
	I1127 23:36:19.293996   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.key: {Name:mk0bd26366e06dd8bb5931d4e2979a5249dbf287 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:19.294081   54577 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key.dd3b5fb2
	I1127 23:36:19.294099   54577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1127 23:36:19.694747   54577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt.dd3b5fb2 ...
	I1127 23:36:19.694775   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt.dd3b5fb2: {Name:mk8e645595e40b1193085fae47221a52e2531c30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:19.694939   54577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key.dd3b5fb2 ...
	I1127 23:36:19.694957   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key.dd3b5fb2: {Name:mkc3a28e171782552fc207d97cd1598d8ea1da3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:19.695031   54577 certs.go:337] copying /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt
	I1127 23:36:19.695106   54577 certs.go:341] copying /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key
	I1127 23:36:19.695169   54577 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.key
	I1127 23:36:19.695187   54577 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.crt with IP's: []
	I1127 23:36:20.005055   54577 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.crt ...
	I1127 23:36:20.005085   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.crt: {Name:mkabc0eef269490615252340ad08a86dc540804d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:20.005258   54577 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.key ...
	I1127 23:36:20.005279   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.key: {Name:mk2af2d9c860f32c85314a982b46ac9dffe92268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
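Note: the three signed certs above (client, apiserver, proxy-client) come out of minikube's crypto.go, which drives Go's crypto/x509 directly. A minimal sketch of that CA-signing flow, assuming an RSA PKCS#1 CA key pair on disk like the ca.crt/ca.key reused above (file names and the CommonName are illustrative, errors elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Load the shared CA whose generation was skipped above (already on disk).
        caPEM, _ := os.ReadFile("ca.crt") // errors elided throughout for brevity
        caKeyPEM, _ := os.ReadFile("ca.key")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

        // Fresh key pair for the leaf certificate.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        // Sign the leaf with the CA: this is the "generating ... signed cert" step.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
        _ = os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
        _ = os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)}), 0o600)
    }

For the apiserver cert the template would additionally carry IPAddresses for [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1], matching the IP list logged above.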
	I1127 23:36:20.005356   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1127 23:36:20.005377   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1127 23:36:20.005390   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1127 23:36:20.005415   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1127 23:36:20.005429   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1127 23:36:20.005446   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1127 23:36:20.005461   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1127 23:36:20.005472   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1127 23:36:20.005528   54577 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460.pem (1338 bytes)
	W1127 23:36:20.005581   54577 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460_empty.pem, impossibly tiny 0 bytes
	I1127 23:36:20.005595   54577 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 23:36:20.005625   54577 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem (1078 bytes)
	I1127 23:36:20.005656   54577 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem (1123 bytes)
	I1127 23:36:20.005683   54577 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem (1679 bytes)
	I1127 23:36:20.005731   54577 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem (1708 bytes)
	I1127 23:36:20.005769   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460.pem -> /usr/share/ca-certificates/7460.pem
	I1127 23:36:20.005793   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem -> /usr/share/ca-certificates/74602.pem
	I1127 23:36:20.005812   54577 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:36:20.006397   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 23:36:20.033660   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1127 23:36:20.060009   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 23:36:20.087299   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1127 23:36:20.113646   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 23:36:20.138981   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 23:36:20.165900   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 23:36:20.190830   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1127 23:36:20.215937   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460.pem --> /usr/share/ca-certificates/7460.pem (1338 bytes)
	I1127 23:36:20.242053   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem --> /usr/share/ca-certificates/74602.pem (1708 bytes)
	I1127 23:36:20.267695   54577 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 23:36:20.292855   54577 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 23:36:20.312000   54577 ssh_runner.go:195] Run: openssl version
	I1127 23:36:20.318787   54577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7460.pem && ln -fs /usr/share/ca-certificates/7460.pem /etc/ssl/certs/7460.pem"
	I1127 23:36:20.329412   54577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7460.pem
	I1127 23:36:20.333479   54577 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:30 /usr/share/ca-certificates/7460.pem
	I1127 23:36:20.333551   54577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7460.pem
	I1127 23:36:20.342249   54577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7460.pem /etc/ssl/certs/51391683.0"
	I1127 23:36:20.352856   54577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74602.pem && ln -fs /usr/share/ca-certificates/74602.pem /etc/ssl/certs/74602.pem"
	I1127 23:36:20.363419   54577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74602.pem
	I1127 23:36:20.367481   54577 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:30 /usr/share/ca-certificates/74602.pem
	I1127 23:36:20.367540   54577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74602.pem
	I1127 23:36:20.375456   54577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74602.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 23:36:20.385803   54577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 23:36:20.396817   54577 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:36:20.401082   54577 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:36:20.401160   54577 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 23:36:20.409304   54577 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
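Note: the openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each trusted CA must be reachable as /etc/ssl/certs/<subject-hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A sketch of the same two steps driven from Go, similar in spirit to what ssh_runner issues over SSH (paths illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash mirrors the two shell steps above: ask openssl for the
    // subject-name hash, then point /etc/ssl/certs/<hash>.0 at the PEM.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        // ln -fs keeps the operation idempotent, as in the log's one-liners.
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }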
	I1127 23:36:20.420562   54577 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 23:36:20.424651   54577 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1127 23:36:20.424699   54577 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-916543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-916543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:36:20.424815   54577 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1127 23:36:20.443255   54577 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 23:36:20.453424   54577 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 23:36:20.463426   54577 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1127 23:36:20.463537   54577 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 23:36:20.473576   54577 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 23:36:20.473620   54577 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1127 23:36:20.526743   54577 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1127 23:36:20.527199   54577 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 23:36:20.735006   54577 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1127 23:36:20.735122   54577 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1050-aws
	I1127 23:36:20.735195   54577 kubeadm.go:322] DOCKER_VERSION: 24.0.7
	I1127 23:36:20.735258   54577 kubeadm.go:322] OS: Linux
	I1127 23:36:20.735332   54577 kubeadm.go:322] CGROUPS_CPU: enabled
	I1127 23:36:20.735403   54577 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1127 23:36:20.735477   54577 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1127 23:36:20.735547   54577 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1127 23:36:20.735634   54577 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1127 23:36:20.735704   54577 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1127 23:36:20.821274   54577 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 23:36:20.821447   54577 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 23:36:20.821573   54577 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 23:36:21.017854   54577 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 23:36:21.019206   54577 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 23:36:21.019532   54577 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 23:36:21.120094   54577 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 23:36:21.124717   54577 out.go:204]   - Generating certificates and keys ...
	I1127 23:36:21.124927   54577 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 23:36:21.125036   54577 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 23:36:21.532995   54577 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1127 23:36:21.951036   54577 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1127 23:36:22.227170   54577 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1127 23:36:22.525279   54577 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1127 23:36:22.881885   54577 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1127 23:36:22.882262   54577 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-916543 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:36:23.158680   54577 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1127 23:36:23.159071   54577 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-916543 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1127 23:36:23.846546   54577 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1127 23:36:24.388860   54577 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1127 23:36:24.573061   54577 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1127 23:36:24.573398   54577 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 23:36:25.196302   54577 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 23:36:25.837428   54577 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 23:36:26.240477   54577 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 23:36:27.087382   54577 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 23:36:27.088225   54577 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 23:36:27.090836   54577 out.go:204]   - Booting up control plane ...
	I1127 23:36:27.090952   54577 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 23:36:27.096854   54577 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 23:36:27.098893   54577 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 23:36:27.100602   54577 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 23:36:27.105858   54577 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 23:36:38.608755   54577 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502527 seconds
	I1127 23:36:38.609053   54577 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 23:36:38.623562   54577 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 23:36:39.142084   54577 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 23:36:39.142242   54577 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-916543 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1127 23:36:39.649575   54577 kubeadm.go:322] [bootstrap-token] Using token: pw447o.qrvx7e9juzzw5j15
	I1127 23:36:39.651558   54577 out.go:204]   - Configuring RBAC rules ...
	I1127 23:36:39.651672   54577 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 23:36:39.656781   54577 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 23:36:39.664928   54577 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 23:36:39.667700   54577 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 23:36:39.670118   54577 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 23:36:39.672777   54577 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 23:36:39.687498   54577 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 23:36:40.019317   54577 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 23:36:40.091320   54577 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 23:36:40.092672   54577 kubeadm.go:322] 
	I1127 23:36:40.092748   54577 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 23:36:40.092759   54577 kubeadm.go:322] 
	I1127 23:36:40.092833   54577 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 23:36:40.092842   54577 kubeadm.go:322] 
	I1127 23:36:40.092867   54577 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 23:36:40.092927   54577 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 23:36:40.092978   54577 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 23:36:40.092985   54577 kubeadm.go:322] 
	I1127 23:36:40.093035   54577 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 23:36:40.093119   54577 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 23:36:40.093199   54577 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 23:36:40.093208   54577 kubeadm.go:322] 
	I1127 23:36:40.093287   54577 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 23:36:40.093365   54577 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 23:36:40.093373   54577 kubeadm.go:322] 
	I1127 23:36:40.093452   54577 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token pw447o.qrvx7e9juzzw5j15 \
	I1127 23:36:40.093556   54577 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f4690ca77095961f6bb42e5114ae321e899e29e7a594db1af8b49ab63220abf \
	I1127 23:36:40.093579   54577 kubeadm.go:322]     --control-plane 
	I1127 23:36:40.093584   54577 kubeadm.go:322] 
	I1127 23:36:40.093663   54577 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 23:36:40.093670   54577 kubeadm.go:322] 
	I1127 23:36:40.093747   54577 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token pw447o.qrvx7e9juzzw5j15 \
	I1127 23:36:40.093845   54577 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:0f4690ca77095961f6bb42e5114ae321e899e29e7a594db1af8b49ab63220abf 
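Note: the --discovery-token-ca-cert-hash pin in the join commands above is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA, not over the whole certificate. A sketch of the computation (CA path taken from the scp step earlier in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins the CA by hashing its DER SubjectPublicKeyInfo.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }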
	I1127 23:36:40.097726   54577 kubeadm.go:322] W1127 23:36:20.525869    1663 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1127 23:36:40.097908   54577 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1127 23:36:40.098035   54577 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1127 23:36:40.098236   54577 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1050-aws\n", err: exit status 1
	I1127 23:36:40.098349   54577 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 23:36:40.098488   54577 kubeadm.go:322] W1127 23:36:27.096779    1663 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:36:40.098611   54577 kubeadm.go:322] W1127 23:36:27.098838    1663 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1127 23:36:40.098628   54577 cni.go:84] Creating CNI manager for ""
	I1127 23:36:40.098643   54577 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1127 23:36:40.098663   54577 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 23:36:40.098783   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:40.098858   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45 minikube.k8s.io/name=ingress-addon-legacy-916543 minikube.k8s.io/updated_at=2023_11_27T23_36_40_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:40.117945   54577 ops.go:34] apiserver oom_adj: -16
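Note: the minikube-rbac binding just created grants cluster-admin to kube-system's default service account, and the 'get sa default' loop below polls until that account exists (the duration line at 23:36:55 attributes the wait to elevateKubeSystemPrivileges). The imperative kubectl call is roughly equivalent to creating this object via client-go (a sketch; kubeconfig path illustrative):

    package main

    import (
        "context"
        "fmt"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path illustrative
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        binding := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "ClusterRole",
                Name:     "cluster-admin",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "default",
                Namespace: "kube-system",
            }},
        }
        _, err = cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), binding, metav1.CreateOptions{})
        fmt.Println("create result:", err)
    }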
	I1127 23:36:40.693327   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:40.781944   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:41.369288   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:41.869633   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:42.369736   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:42.869230   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:43.369017   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:43.868876   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:44.368780   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:44.868910   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:45.368779   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:45.868761   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:46.369563   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:46.869641   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:47.369020   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:47.869710   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:48.369693   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:48.869323   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:49.369417   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:49.869545   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:50.369250   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:50.868782   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:51.369469   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:51.868914   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:52.368771   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:52.868842   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:53.369612   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:53.869649   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:54.368772   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:54.869421   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:55.369454   54577 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 23:36:55.515148   54577 kubeadm.go:1081] duration metric: took 15.416408577s to wait for elevateKubeSystemPrivileges.
	I1127 23:36:55.515181   54577 kubeadm.go:406] StartCluster complete in 35.090484853s
	I1127 23:36:55.515201   54577 settings.go:142] acquiring lock: {Name:mk0fc8a58a3a281d2b922894958c44e0a802f6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:55.515265   54577 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:36:55.515938   54577 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/kubeconfig: {Name:mk7ba64d42902767d9bc759b2ed9230b4474c63d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:36:55.516147   54577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 23:36:55.516401   54577 config.go:182] Loaded profile config "ingress-addon-legacy-916543": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1127 23:36:55.516536   54577 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 23:36:55.516603   54577 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-916543"
	I1127 23:36:55.516619   54577 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-916543"
	I1127 23:36:55.516652   54577 host.go:66] Checking if "ingress-addon-legacy-916543" exists ...
	I1127 23:36:55.516666   54577 kapi.go:59] client config for ingress-addon-legacy-916543: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.key", CAFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:36:55.517098   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Status}}
	I1127 23:36:55.517573   54577 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-916543"
	I1127 23:36:55.517591   54577 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-916543"
	I1127 23:36:55.517714   54577 cert_rotation.go:137] Starting client certificate rotation controller
	I1127 23:36:55.517867   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Status}}
	I1127 23:36:55.567775   54577 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-916543" context rescaled to 1 replicas
	I1127 23:36:55.567820   54577 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 23:36:55.570064   54577 out.go:177] * Verifying Kubernetes components...
	I1127 23:36:55.572164   54577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:36:55.574807   54577 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 23:36:55.571478   54577 kapi.go:59] client config for ingress-addon-legacy-916543: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.key", CAFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:36:55.576915   54577 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:36:55.576946   54577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 23:36:55.577010   54577 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-916543"
	I1127 23:36:55.577045   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:55.577055   54577 host.go:66] Checking if "ingress-addon-legacy-916543" exists ...
	I1127 23:36:55.577596   54577 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-916543 --format={{.State.Status}}
	I1127 23:36:55.618361   54577 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 23:36:55.618383   54577 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 23:36:55.618441   54577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-916543
	I1127 23:36:55.619948   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:55.644985   54577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/ingress-addon-legacy-916543/id_rsa Username:docker}
	I1127 23:36:55.781748   54577 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 23:36:55.782490   54577 kapi.go:59] client config for ingress-addon-legacy-916543: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt", KeyFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.key", CAFile:"/home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c6420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1127 23:36:55.782816   54577 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-916543" to be "Ready" ...
	I1127 23:36:55.785946   54577 node_ready.go:49] node "ingress-addon-legacy-916543" has status "Ready":"True"
	I1127 23:36:55.785962   54577 node_ready.go:38] duration metric: took 3.10131ms waiting for node "ingress-addon-legacy-916543" to be "Ready" ...
	I1127 23:36:55.785971   54577 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 23:36:55.792330   54577 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.803661   54577 pod_ready.go:92] pod "etcd-ingress-addon-legacy-916543" in "kube-system" namespace has status "Ready":"True"
	I1127 23:36:55.803728   54577 pod_ready.go:81] duration metric: took 11.097901ms waiting for pod "etcd-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.803757   54577 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.805384   54577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 23:36:55.834452   54577 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 23:36:55.869903   54577 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-916543" in "kube-system" namespace has status "Ready":"True"
	I1127 23:36:55.869962   54577 pod_ready.go:81] duration metric: took 66.177039ms waiting for pod "kube-apiserver-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.870015   54577 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.883856   54577 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-916543" in "kube-system" namespace has status "Ready":"True"
	I1127 23:36:55.883923   54577 pod_ready.go:81] duration metric: took 13.875384ms waiting for pod "kube-controller-manager-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.883948   54577 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-74gkc" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:55.983592   54577 request.go:629] Waited for 87.173288ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-74gkc
	I1127 23:36:56.183644   54577 request.go:629] Waited for 196.327126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-916543
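Note: the "Waited for ... due to client-side throttling" lines are client-go's local rate limiter, not server-side priority and fairness; with QPS:0 and Burst:0 in the rest.Config dumps above, client-go's defaults of 5 QPS / burst 10 apply. A sketch of raising those limits on a rest.Config (values and kubeconfig path illustrative):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig, then loosen the client-side limiter before
        // building the clientset (QPS/Burst of 0 means the 5/10 defaults).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path illustrative
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // sustained requests per second
        cfg.Burst = 100 // short-term burst allowance
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", clientset != nil)
    }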
	I1127 23:36:56.672192   54577 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1127 23:36:56.690026   54577 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1127 23:36:56.692518   54577 addons.go:502] enable addons completed in 1.17597621s: enabled=[storage-provisioner default-storageclass]
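Note: the host record injection at 23:36:55-56 is the sed pipeline shown above rewriting the coredns ConfigMap. Reconstructed from the expressions in that command, the stanza inserted into the Corefile is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

(The same pipeline also inserts a log directive ahead of errors.)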
	I1127 23:36:57.192529   54577 pod_ready.go:92] pod "kube-proxy-74gkc" in "kube-system" namespace has status "Ready":"True"
	I1127 23:36:57.192554   54577 pod_ready.go:81] duration metric: took 1.308581975s waiting for pod "kube-proxy-74gkc" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:57.192565   54577 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:57.383743   54577 request.go:629] Waited for 189.070583ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-916543
	I1127 23:36:57.386133   54577 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-916543" in "kube-system" namespace has status "Ready":"True"
	I1127 23:36:57.386158   54577 pod_ready.go:81] duration metric: took 193.582665ms waiting for pod "kube-scheduler-ingress-addon-legacy-916543" in "kube-system" namespace to be "Ready" ...
	I1127 23:36:57.386168   54577 pod_ready.go:38] duration metric: took 1.60018712s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
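Note: each pod_ready.go wait above reduces to polling the pod object and checking its PodReady condition. A minimal client-go sketch of that loop (helper name hypothetical, not minikube's actual code; pod name and namespace taken from this log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, _ := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path illustrative
        cs, _ := kubernetes.NewForConfig(cfg)
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget above
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-74gkc", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for Ready")
    }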
	I1127 23:36:57.386187   54577 api_server.go:52] waiting for apiserver process to appear ...
	I1127 23:36:57.386247   54577 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:36:57.399164   54577 api_server.go:72] duration metric: took 1.831312216s to wait for apiserver process to appear ...
	I1127 23:36:57.399183   54577 api_server.go:88] waiting for apiserver healthz status ...
	I1127 23:36:57.399199   54577 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1127 23:36:57.408060   54577 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1127 23:36:57.408865   54577 api_server.go:141] control plane version: v1.18.20
	I1127 23:36:57.408883   54577 api_server.go:131] duration metric: took 9.693176ms to wait for apiserver health ...
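Note: the healthz check above is a plain HTTPS GET on https://192.168.49.2:8443/healthz that trusts the minikube CA and expects the literal body "ok". A sketch (local CA path illustrative, endpoint taken from this log):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        // Trust only the cluster CA that was scp'd to /var/lib/minikube/certs above.
        caPEM, err := os.ReadFile("ca.crt") // local copy of the minikube CA; path illustrative
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }}

        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with body "ok", as seen in the log.
        fmt.Println(resp.StatusCode, string(body))
    }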
	I1127 23:36:57.408891   54577 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 23:36:57.583271   54577 request.go:629] Waited for 174.305649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:36:57.589379   54577 system_pods.go:59] 7 kube-system pods found
	I1127 23:36:57.589462   54577 system_pods.go:61] "coredns-66bff467f8-tf7db" [ebbbcf1f-abd3-4eee-a4d5-dfbbad501b21] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1127 23:36:57.589489   54577 system_pods.go:61] "etcd-ingress-addon-legacy-916543" [869bef6d-ed12-4464-b778-4c5df5c7b186] Running
	I1127 23:36:57.589515   54577 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-916543" [ca484640-1c99-467a-bd38-fd0331eae2ea] Running
	I1127 23:36:57.589551   54577 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-916543" [59bc179d-dd0a-4286-ae8f-ec45db309290] Running
	I1127 23:36:57.589575   54577 system_pods.go:61] "kube-proxy-74gkc" [86921c83-bf85-4a84-942e-ec44627bea27] Running
	I1127 23:36:57.589599   54577 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-916543" [1dff9507-47c2-4b8d-87e5-041e4dd52d8a] Running
	I1127 23:36:57.589638   54577 system_pods.go:61] "storage-provisioner" [4f35203a-ae86-4c06-9966-f070eae95ac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1127 23:36:57.589662   54577 system_pods.go:74] duration metric: took 180.765394ms to wait for pod list to return data ...
	I1127 23:36:57.589686   54577 default_sa.go:34] waiting for default service account to be created ...
	I1127 23:36:57.783036   54577 request.go:629] Waited for 193.25028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1127 23:36:57.785289   54577 default_sa.go:45] found service account: "default"
	I1127 23:36:57.785312   54577 default_sa.go:55] duration metric: took 195.586568ms for default service account to be created ...
	I1127 23:36:57.785322   54577 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 23:36:57.983644   54577 request.go:629] Waited for 198.265177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1127 23:36:57.988779   54577 system_pods.go:86] 7 kube-system pods found
	I1127 23:36:57.988810   54577 system_pods.go:89] "coredns-66bff467f8-tf7db" [ebbbcf1f-abd3-4eee-a4d5-dfbbad501b21] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1127 23:36:57.988818   54577 system_pods.go:89] "etcd-ingress-addon-legacy-916543" [869bef6d-ed12-4464-b778-4c5df5c7b186] Running
	I1127 23:36:57.988825   54577 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-916543" [ca484640-1c99-467a-bd38-fd0331eae2ea] Running
	I1127 23:36:57.988831   54577 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-916543" [59bc179d-dd0a-4286-ae8f-ec45db309290] Running
	I1127 23:36:57.988836   54577 system_pods.go:89] "kube-proxy-74gkc" [86921c83-bf85-4a84-942e-ec44627bea27] Running
	I1127 23:36:57.988848   54577 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-916543" [1dff9507-47c2-4b8d-87e5-041e4dd52d8a] Running
	I1127 23:36:57.988860   54577 system_pods.go:89] "storage-provisioner" [4f35203a-ae86-4c06-9966-f070eae95ac1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1127 23:36:57.988875   54577 system_pods.go:126] duration metric: took 203.547384ms to wait for k8s-apps to be running ...
	I1127 23:36:57.988883   54577 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 23:36:57.988947   54577 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:36:58.002388   54577 system_svc.go:56] duration metric: took 13.495818ms WaitForService to wait for kubelet.
	I1127 23:36:58.002432   54577 kubeadm.go:581] duration metric: took 2.43457252s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 23:36:58.002459   54577 node_conditions.go:102] verifying NodePressure condition ...
	I1127 23:36:58.183862   54577 request.go:629] Waited for 181.319464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1127 23:36:58.186970   54577 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1127 23:36:58.187118   54577 node_conditions.go:123] node cpu capacity is 2
	I1127 23:36:58.187130   54577 node_conditions.go:105] duration metric: took 184.659102ms to run NodePressure ...
	I1127 23:36:58.187142   54577 start.go:228] waiting for startup goroutines ...
	I1127 23:36:58.187153   54577 start.go:233] waiting for cluster config update ...
	I1127 23:36:58.187163   54577 start.go:242] writing updated cluster config ...
	I1127 23:36:58.187433   54577 ssh_runner.go:195] Run: rm -f paused
	I1127 23:36:58.254426   54577 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1127 23:36:58.258632   54577 out.go:177] 
	W1127 23:36:58.260689   54577 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1127 23:36:58.262467   54577 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1127 23:36:58.265058   54577 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-916543" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Nov 27 23:36:17 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:36:17.504077832Z" level=info msg="Daemon has completed initialization"
	Nov 27 23:36:17 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:36:17.530239413Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 27 23:36:17 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:36:17.530491190Z" level=info msg="API listen on [::]:2376"
	Nov 27 23:36:17 ingress-addon-legacy-916543 systemd[1]: Started Docker Application Container Engine.
	Nov 27 23:36:59 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:36:59.887733656Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Nov 27 23:37:01 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:01.379014702Z" level=info msg="ignoring event" container=78fb01cfd8acde46022a82b1ae1bbee6e408a3787cbfb0fc61b1384f83f5befc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:01 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:01.408233248Z" level=info msg="ignoring event" container=a98e08a7357d55eae0ac4489e9a73e7407e7524fd31ed286bad8be7f6171e613 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:02 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:02.033579985Z" level=info msg="ignoring event" container=f015c9e96e6d51f4ffea856c9e0115dcd082f5b318c1bcaa8106dabc36ff2122 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:02 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:02.189878438Z" level=info msg="ignoring event" container=43cd1f6ddaa3903b69bc3dd49e000cd3b9bdfedfd8b124113b3fbee068d4fe7e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:03 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:03.051029343Z" level=info msg="ignoring event" container=833cf3ab936bc43092586344839066dc0a48d7643f2f47694bed81d1d3fc7b32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:03 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:03.492222471Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Nov 27 23:37:10 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:10.360087553Z" level=warning msg="Published ports are discarded when using host network mode"
	Nov 27 23:37:10 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:10.392060313Z" level=warning msg="Published ports are discarded when using host network mode"
	Nov 27 23:37:10 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:10.525146407Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Nov 27 23:37:16 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:16.524018740Z" level=info msg="ignoring event" container=2bd86b279c227267a381def42c9a11088b1ced97e760ff5741610f100ff56c44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:17 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:17.274574264Z" level=info msg="ignoring event" container=4193c26a67ac3c6217786dd260c1a155b1cb2cc31f3bb930296e3fbcf62a6b82 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:32 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:32.769100452Z" level=info msg="ignoring event" container=a459f9b8b9d359ce10534481df7420efbc72237d101a41f9bd369cf20d2ab4fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:36 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:36.597095892Z" level=info msg="ignoring event" container=91c74a6ee309b25c15e05e1289d750b699fecc22f397785713cd826b84d19ead module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:37 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:37.444894504Z" level=info msg="ignoring event" container=1ec06863c5266a32bf6a730105eada817660a12f41c25a1630da6791ba2c1490 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:49 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:49.652287538Z" level=info msg="ignoring event" container=ad096bd5879b5e49858db348028aac774635047136eab318667ef36da26a7108 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:53 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:53.816501391Z" level=info msg="ignoring event" container=c20b1e73547fdb786f750ce8116e09600a5b80fcbe25cc9b31d7b73f17430cac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:56 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:56.518167870Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e650d4d7dff4caef26a641443901e0023b54df2a07be99f9486468370d56ec63
	Nov 27 23:37:56 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:56.532808413Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e650d4d7dff4caef26a641443901e0023b54df2a07be99f9486468370d56ec63
	Nov 27 23:37:56 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:56.605515384Z" level=info msg="ignoring event" container=e650d4d7dff4caef26a641443901e0023b54df2a07be99f9486468370d56ec63 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 23:37:56 ingress-addon-legacy-916543 dockerd[1307]: time="2023-11-27T23:37:56.655327104Z" level=info msg="ignoring event" container=d4e0cc9a045da9ce513ff59f60915511ed16d96990f5679b44bebb94ced6d3c8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c20b1e73547fd       dd1b12fcb6097                                                                                                      9 seconds ago        Exited              hello-world-app           2                   c7ec2c1cb8d84       hello-world-app-5f5d8b66bb-hnthp
	15867e13a8e51       nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77                                      37 seconds ago       Running             nginx                     0                   ff72c712fa09b       nginx
	e650d4d7dff4c       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   54 seconds ago       Exited              controller                0                   d4e0cc9a045da       ingress-nginx-controller-7fcf777cb7-s7x7b
	43cd1f6ddaa39       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   833cf3ab936bc       ingress-nginx-admission-patch-94989
	a98e08a7357d5       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   f015c9e96e6d5       ingress-nginx-admission-create-plp2m
	5ef5f602f9e80       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   12c3cfa14f178       storage-provisioner
	bd8de93fb9c0f       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   4bc552c7fb239       coredns-66bff467f8-tf7db
	bc2e67b84b089       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   d7d06202ab609       kube-proxy-74gkc
	fbcdedbbcb36f       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   f7704ed0e730d       etcd-ingress-addon-legacy-916543
	e2eb4bd30d3f0       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   486412c8fa84d       kube-scheduler-ingress-addon-legacy-916543
	f52fc6b6f2ae6       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   f1527525d525f       kube-controller-manager-ingress-addon-legacy-916543
	aaf0b247d2e8c       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   9fa487a5f17ca       kube-apiserver-ingress-addon-legacy-916543
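	# The table above shows hello-world-app already on restart attempt 2 and the
	# ingress controller force-killed by dockerd. A minimal way to pull the exited
	# container's output directly (a sketch, assuming shell access to the node via
	# `minikube ssh -p ingress-addon-legacy-916543`; docker runtime per the table) is:
	#   docker ps -a --filter name=hello-world-app   # lists the pod's containers, including exited ones
	#   docker logs c20b1e73547fd                    # container ID taken from the first row above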
	
	* 
	* ==> coredns [bd8de93fb9c0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	[INFO] Reloading complete
	[INFO] 127.0.0.1:36774 - 8461 "HINFO IN 6577825668181939340.6994713999217151235. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012162386s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-916543
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-916543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c8086df69b157f30f19d083fe45cc014f102df45
	                    minikube.k8s.io/name=ingress-addon-legacy-916543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T23_36_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 23:36:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-916543
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Nov 2023 23:37:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 23:37:43 +0000   Mon, 27 Nov 2023 23:36:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 23:37:43 +0000   Mon, 27 Nov 2023 23:36:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 23:37:43 +0000   Mon, 27 Nov 2023 23:36:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 23:37:43 +0000   Mon, 27 Nov 2023 23:36:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-916543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 e1bdb07fa176488e9df661b1e18b93c9
	  System UUID:                af29b37e-4067-4425-b2a2-15bfde8310e0
	  Boot ID:                    78fd6d56-9be1-4fcf-98b8-4f12948f7c56
	  Kernel Version:             5.15.0-1050-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-hnthp                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 coredns-66bff467f8-tf7db                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     67s
	  kube-system                 etcd-ingress-addon-legacy-916543                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-apiserver-ingress-addon-legacy-916543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-916543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-74gkc                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-ingress-addon-legacy-916543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  NodeHasSufficientMemory  93s (x5 over 93s)  kubelet     Node ingress-addon-legacy-916543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    93s (x4 over 93s)  kubelet     Node ingress-addon-legacy-916543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     93s (x4 over 93s)  kubelet     Node ingress-addon-legacy-916543 status is now: NodeHasSufficientPID
	  Normal  Starting                 79s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s                kubelet     Node ingress-addon-legacy-916543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet     Node ingress-addon-legacy-916543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s                kubelet     Node ingress-addon-legacy-916543 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                69s                kubelet     Node ingress-addon-legacy-916543 status is now: NodeReady
	  Normal  Starting                 66s                kube-proxy  Starting kube-proxy.
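	# The resource accounting above is internally consistent: CPU requests sum to
	# 100m (coredns) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler)
	# = 650m, i.e. 650m / 2000m of the 2-CPU node = 32.5%, reported as 32%. A sketch of
	# re-checking this against the live node (context name as used elsewhere in this log) is:
	#   kubectl --context ingress-addon-legacy-916543 describe node ingress-addon-legacy-916543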
	
	* 
	* ==> dmesg <==
	* [  +0.000726] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=0000000085ebe7e0{9p.inode} n=0000000027d3a868
	[  +0.001079] FS-Cache: N-key=[8] '966ced0000000000'
	[  +0.004276] FS-Cache: Duplicate cookie detected
	[  +0.000715] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001004] FS-Cache: O-cookie d=0000000085ebe7e0{9p.inode} n=0000000052fda981
	[  +0.001099] FS-Cache: O-key=[8] '966ced0000000000'
	[  +0.000729] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000964] FS-Cache: N-cookie d=0000000085ebe7e0{9p.inode} n=0000000029acb65c
	[  +0.001077] FS-Cache: N-key=[8] '966ced0000000000'
	[  +2.530082] FS-Cache: Duplicate cookie detected
	[  +0.000748] FS-Cache: O-cookie c=00000004 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=0000000085ebe7e0{9p.inode} n=0000000009e2db4f
	[  +0.001148] FS-Cache: O-key=[8] '956ced0000000000'
	[  +0.000780] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000988] FS-Cache: N-cookie d=0000000085ebe7e0{9p.inode} n=00000000b3ec31e9
	[  +0.001119] FS-Cache: N-key=[8] '956ced0000000000'
	[  +0.283728] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.001036] FS-Cache: O-cookie d=0000000085ebe7e0{9p.inode} n=000000007026c5c8
	[  +0.001087] FS-Cache: O-key=[8] '9d6ced0000000000'
	[  +0.000726] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=0000000085ebe7e0{9p.inode} n=0000000027d3a868
	[  +0.001090] FS-Cache: N-key=[8] '9d6ced0000000000'
	[Nov27 23:36] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	* 
	* ==> etcd [fbcdedbbcb36] <==
	* raft2023/11/27 23:36:31 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/27 23:36:31 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/27 23:36:31 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/27 23:36:31 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 23:36:32.505358 W | auth: simple token is not cryptographically signed
	2023-11-27 23:36:32.509032 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-27 23:36:32.516517 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-27 23:36:32.516888 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-27 23:36:32.517176 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-27 23:36:32.517374 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/27 23:36:32 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-27 23:36:32.518073 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/11/27 23:36:33 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/27 23:36:33 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/27 23:36:33 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/27 23:36:33 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/27 23:36:33 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-27 23:36:33.303220 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-27 23:36:33.303914 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-27 23:36:33.304055 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-27 23:36:33.304156 I | etcdserver: published {Name:ingress-addon-legacy-916543 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-27 23:36:33.304301 I | embed: ready to serve client requests
	2023-11-27 23:36:33.305709 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-27 23:36:33.306239 I | embed: ready to serve client requests
	2023-11-27 23:36:33.307555 I | embed: serving client requests on 127.0.0.1:2379
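	# The etcd log above shows a clean single-member bootstrap: the node elects itself
	# leader at term 2 and serves clients on 192.168.49.2:2379 and 127.0.0.1:2379. A
	# hedged health probe from inside the node (cert paths taken from the embed line
	# above; reusing the server cert as the client cert is an assumption) would be:
	#   ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	#     --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	#     --cert=/var/lib/minikube/certs/etcd/server.crt \
	#     --key=/var/lib/minikube/certs/etcd/server.key \
	#     endpoint health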
	
	* 
	* ==> kernel <==
	*  23:38:02 up 20 min,  0 users,  load average: 1.93, 2.02, 1.22
	Linux ingress-addon-legacy-916543 5.15.0-1050-aws #55~20.04.1-Ubuntu SMP Mon Nov 6 12:18:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kube-apiserver [aaf0b247d2e8] <==
	* E1127 23:36:37.164559       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1127 23:36:37.321380       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1127 23:36:37.321422       1 cache.go:39] Caches are synced for autoregister controller
	I1127 23:36:37.322627       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1127 23:36:37.336185       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1127 23:36:37.357079       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1127 23:36:38.115843       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1127 23:36:38.115877       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1127 23:36:38.122978       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1127 23:36:38.128024       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1127 23:36:38.128047       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1127 23:36:38.484493       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 23:36:38.522832       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1127 23:36:38.646145       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1127 23:36:38.647144       1 controller.go:609] quota admission added evaluator for: endpoints
	I1127 23:36:38.650735       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1127 23:36:39.026193       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 23:36:39.578340       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1127 23:36:40.005130       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1127 23:36:40.063895       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1127 23:36:55.351410       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1127 23:36:55.800131       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1127 23:36:59.177794       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1127 23:37:21.250817       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1127 23:37:53.614018       1 watch.go:251] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoder{writer:(*http2.responseWriter)(0x400c747310), encoder:(*versioning.codec)(0x40062da280), buf:(*bytes.Buffer)(0x4004e41da0)})
	
	* 
	* ==> kube-controller-manager [f52fc6b6f2ae] <==
	* I1127 23:36:55.548330       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I1127 23:36:55.548355       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-916543", UID:"f308287d-7520-4088-94e9-2d6cd333b86b", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-916543 event: Registered Node ingress-addon-legacy-916543 in Controller
	I1127 23:36:55.548375       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1127 23:36:55.588464       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I1127 23:36:55.696996       1 shared_informer.go:230] Caches are synced for attach detach 
	I1127 23:36:55.797147       1 shared_informer.go:230] Caches are synced for deployment 
	I1127 23:36:55.815603       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"746efb45-263e-416a-b93a-173a87e26a34", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-tf7db
	I1127 23:36:55.822027       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"e0322958-95e1-463e-81a2-e0a39d29df6b", APIVersion:"apps/v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1127 23:36:55.848010       1 shared_informer.go:230] Caches are synced for disruption 
	I1127 23:36:55.848028       1 disruption.go:339] Sending events to api server.
	I1127 23:36:55.850471       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 23:36:55.850494       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1127 23:36:55.853043       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1127 23:36:55.898065       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 23:36:55.947173       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1127 23:36:56.047348       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I1127 23:36:56.047402       1 shared_informer.go:230] Caches are synced for resource quota 
	I1127 23:36:59.179937       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"06ffc68c-45af-48ab-a44b-5a4dd039136d", APIVersion:"apps/v1", ResourceVersion:"415", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1127 23:36:59.204341       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a3eeb780-f90f-4fb5-9cf2-c59290862d9a", APIVersion:"batch/v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-plp2m
	I1127 23:36:59.204550       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"166649e0-fb23-456f-9a40-98c628a0ad10", APIVersion:"apps/v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-s7x7b
	I1127 23:36:59.232815       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b599d715-760d-4d49-bdaa-a06ea69a2367", APIVersion:"batch/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-94989
	I1127 23:37:02.006630       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"a3eeb780-f90f-4fb5-9cf2-c59290862d9a", APIVersion:"batch/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:37:03.022977       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b599d715-760d-4d49-bdaa-a06ea69a2367", APIVersion:"batch/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1127 23:37:33.011840       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"8823d942-c7ed-4919-9f90-4ebc2cd4d00b", APIVersion:"apps/v1", ResourceVersion:"563", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1127 23:37:33.030664       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"46c0876e-7ed3-4476-85ef-c159010e1f37", APIVersion:"apps/v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-hnthp
	
	* 
	* ==> kube-proxy [bc2e67b84b08] <==
	* W1127 23:36:56.314090       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1127 23:36:56.329239       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1127 23:36:56.329284       1 server_others.go:186] Using iptables Proxier.
	I1127 23:36:56.329619       1 server.go:583] Version: v1.18.20
	I1127 23:36:56.338744       1 config.go:315] Starting service config controller
	I1127 23:36:56.338897       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1127 23:36:56.339444       1 config.go:133] Starting endpoints config controller
	I1127 23:36:56.339562       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1127 23:36:56.449611       1 shared_informer.go:230] Caches are synced for service config 
	I1127 23:36:56.449698       1 shared_informer.go:230] Caches are synced for endpoints config 
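	# kube-proxy fell back to the iptables proxier because no proxy mode was set, then
	# synced its service and endpoints caches. A quick check that the proxier actually
	# programmed rules (a sketch, run inside the node) is:
	#   sudo iptables -t nat -L KUBE-SERVICES | head   # non-empty once the proxier has synced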
	
	* 
	* ==> kube-scheduler [e2eb4bd30d3f] <==
	* W1127 23:36:37.271179       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1127 23:36:37.271297       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1127 23:36:37.271391       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1127 23:36:37.306874       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:36:37.306902       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1127 23:36:37.308820       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1127 23:36:37.308987       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:36:37.308999       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1127 23:36:37.309050       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1127 23:36:37.316277       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:36:37.318453       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 23:36:37.318707       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 23:36:37.318515       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 23:36:37.319005       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:36:37.319148       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 23:36:37.319327       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 23:36:37.319736       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 23:36:37.327980       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1127 23:36:37.335725       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 23:36:37.335802       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 23:36:37.335864       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 23:36:38.225446       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 23:36:38.347195       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 23:36:38.366033       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1127 23:36:41.009148       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Nov 27 23:37:39 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:39.344722    2859 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1ec06863c5266a32bf6a730105eada817660a12f41c25a1630da6791ba2c1490
	Nov 27 23:37:39 ingress-addon-legacy-916543 kubelet[2859]: E1127 23:37:39.344948    2859 pod_workers.go:191] Error syncing pod 2ec8e99b-33a0-48b6-ad60-0ff1fac36241 ("hello-world-app-5f5d8b66bb-hnthp_default(2ec8e99b-33a0-48b6-ad60-0ff1fac36241)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hnthp_default(2ec8e99b-33a0-48b6-ad60-0ff1fac36241)"
	Nov 27 23:37:46 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:46.594996    2859 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a459f9b8b9d359ce10534481df7420efbc72237d101a41f9bd369cf20d2ab4fc
	Nov 27 23:37:46 ingress-addon-legacy-916543 kubelet[2859]: E1127 23:37:46.595317    2859 pod_workers.go:191] Error syncing pod 1e3af2cb-2daa-42ef-962d-6bc2f192b50c ("kube-ingress-dns-minikube_kube-system(1e3af2cb-2daa-42ef-962d-6bc2f192b50c)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(1e3af2cb-2daa-42ef-962d-6bc2f192b50c)"
	Nov 27 23:37:48 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:48.821742    2859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-t6bkp" (UniqueName: "kubernetes.io/secret/1e3af2cb-2daa-42ef-962d-6bc2f192b50c-minikube-ingress-dns-token-t6bkp") pod "1e3af2cb-2daa-42ef-962d-6bc2f192b50c" (UID: "1e3af2cb-2daa-42ef-962d-6bc2f192b50c")
	Nov 27 23:37:48 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:48.831113    2859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3af2cb-2daa-42ef-962d-6bc2f192b50c-minikube-ingress-dns-token-t6bkp" (OuterVolumeSpecName: "minikube-ingress-dns-token-t6bkp") pod "1e3af2cb-2daa-42ef-962d-6bc2f192b50c" (UID: "1e3af2cb-2daa-42ef-962d-6bc2f192b50c"). InnerVolumeSpecName "minikube-ingress-dns-token-t6bkp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:37:48 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:48.921989    2859 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-t6bkp" (UniqueName: "kubernetes.io/secret/1e3af2cb-2daa-42ef-962d-6bc2f192b50c-minikube-ingress-dns-token-t6bkp") on node "ingress-addon-legacy-916543" DevicePath ""
	Nov 27 23:37:50 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:50.424603    2859 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: a459f9b8b9d359ce10534481df7420efbc72237d101a41f9bd369cf20d2ab4fc
	Nov 27 23:37:53 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:53.598216    2859 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1ec06863c5266a32bf6a730105eada817660a12f41c25a1630da6791ba2c1490
	Nov 27 23:37:53 ingress-addon-legacy-916543 kubelet[2859]: W1127 23:37:53.849991    2859 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod2ec8e99b-33a0-48b6-ad60-0ff1fac36241/c20b1e73547fdb786f750ce8116e09600a5b80fcbe25cc9b31d7b73f17430cac": none of the resources are being tracked.
	Nov 27 23:37:54 ingress-addon-legacy-916543 kubelet[2859]: W1127 23:37:54.457725    2859 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-hnthp through plugin: invalid network status for
	Nov 27 23:37:54 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:54.464740    2859 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1ec06863c5266a32bf6a730105eada817660a12f41c25a1630da6791ba2c1490
	Nov 27 23:37:54 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:54.465098    2859 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c20b1e73547fdb786f750ce8116e09600a5b80fcbe25cc9b31d7b73f17430cac
	Nov 27 23:37:54 ingress-addon-legacy-916543 kubelet[2859]: E1127 23:37:54.465350    2859 pod_workers.go:191] Error syncing pod 2ec8e99b-33a0-48b6-ad60-0ff1fac36241 ("hello-world-app-5f5d8b66bb-hnthp_default(2ec8e99b-33a0-48b6-ad60-0ff1fac36241)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hnthp_default(2ec8e99b-33a0-48b6-ad60-0ff1fac36241)"
	Nov 27 23:37:54 ingress-addon-legacy-916543 kubelet[2859]: E1127 23:37:54.504435    2859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-s7x7b.179b9f2716d6b49e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-s7x7b", UID:"11adcf04-c069-4727-9f81-7e6818bc2271", APIVersion:"v1", ResourceVersion:"427", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-916543"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15167f49db2009e, ext:74591457074, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15167f49db2009e, ext:74591457074, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-s7x7b.179b9f2716d6b49e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:37:54 ingress-addon-legacy-916543 kubelet[2859]: E1127 23:37:54.514384    2859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-s7x7b.179b9f2716d6b49e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-s7x7b", UID:"11adcf04-c069-4727-9f81-7e6818bc2271", APIVersion:"v1", ResourceVersion:"427", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-916543"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15167f49db2009e, ext:74591457074, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15167f49e71d09e, ext:74604027698, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-s7x7b.179b9f2716d6b49e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 27 23:37:55 ingress-addon-legacy-916543 kubelet[2859]: W1127 23:37:55.472650    2859 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-hnthp through plugin: invalid network status for
	Nov 27 23:37:57 ingress-addon-legacy-916543 kubelet[2859]: W1127 23:37:57.493886    2859 pod_container_deletor.go:77] Container "d4e0cc9a045da9ce513ff59f60915511ed16d96990f5679b44bebb94ced6d3c8" not found in pod's containers
	Nov 27 23:37:58 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:58.644073    2859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-lg866" (UniqueName: "kubernetes.io/secret/11adcf04-c069-4727-9f81-7e6818bc2271-ingress-nginx-token-lg866") pod "11adcf04-c069-4727-9f81-7e6818bc2271" (UID: "11adcf04-c069-4727-9f81-7e6818bc2271")
	Nov 27 23:37:58 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:58.644132    2859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/11adcf04-c069-4727-9f81-7e6818bc2271-webhook-cert") pod "11adcf04-c069-4727-9f81-7e6818bc2271" (UID: "11adcf04-c069-4727-9f81-7e6818bc2271")
	Nov 27 23:37:58 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:58.650013    2859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11adcf04-c069-4727-9f81-7e6818bc2271-ingress-nginx-token-lg866" (OuterVolumeSpecName: "ingress-nginx-token-lg866") pod "11adcf04-c069-4727-9f81-7e6818bc2271" (UID: "11adcf04-c069-4727-9f81-7e6818bc2271"). InnerVolumeSpecName "ingress-nginx-token-lg866". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:37:58 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:58.650747    2859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11adcf04-c069-4727-9f81-7e6818bc2271-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "11adcf04-c069-4727-9f81-7e6818bc2271" (UID: "11adcf04-c069-4727-9f81-7e6818bc2271"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 27 23:37:58 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:58.744418    2859 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/11adcf04-c069-4727-9f81-7e6818bc2271-webhook-cert") on node "ingress-addon-legacy-916543" DevicePath ""
	Nov 27 23:37:58 ingress-addon-legacy-916543 kubelet[2859]: I1127 23:37:58.744464    2859 reconciler.go:319] Volume detached for volume "ingress-nginx-token-lg866" (UniqueName: "kubernetes.io/secret/11adcf04-c069-4727-9f81-7e6818bc2271-ingress-nginx-token-lg866") on node "ingress-addon-legacy-916543" DevicePath ""
	Nov 27 23:37:59 ingress-addon-legacy-916543 kubelet[2859]: W1127 23:37:59.608603    2859 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/11adcf04-c069-4727-9f81-7e6818bc2271/volumes" does not exist
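	# The kubelet log above shows hello-world-app and minikube-ingress-dns both in
	# CrashLoopBackOff while the ingress-nginx namespace is being terminated. The usual
	# next step (a sketch; pod and context names taken from this log, and the cluster
	# must still exist) is to read the crashed container's previous output:
	#   kubectl --context ingress-addon-legacy-916543 logs hello-world-app-5f5d8b66bb-hnthp --previous
	#   kubectl --context ingress-addon-legacy-916543 describe pod hello-world-app-5f5d8b66bb-hnthp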
	
	* 
	* ==> storage-provisioner [5ef5f602f9e8] <==
	* I1127 23:36:58.839789       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 23:36:58.857224       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 23:36:58.857320       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 23:36:58.867081       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 23:36:58.868314       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-916543_1811d203-b2fb-40f0-bf10-22f8410ef7ef!
	I1127 23:36:58.876236       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9ba5805-7d70-42ed-8f00-a273696853a4", APIVersion:"v1", ResourceVersion:"379", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-916543_1811d203-b2fb-40f0-bf10-22f8410ef7ef became leader
	I1127 23:36:58.968948       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-916543_1811d203-b2fb-40f0-bf10-22f8410ef7ef!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-916543 -n ingress-addon-legacy-916543
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-916543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (53.05s)
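To reproduce this failure outside CI, the whole serial group has to be re-run, since ValidateIngressAddons depends on the cluster-start subtests before it. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; the integration build tag and the -minikube-start-args flag follow minikube's test-harness conventions:

    go test -v -tags=integration -timeout 60m ./test/integration \
      -run TestIngressAddonLegacy \
      -args -minikube-start-args="--driver=docker --container-runtime=docker"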

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (509.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-172365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1128 00:31:37.725506    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:37.730999    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:37.741353    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:37.761707    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:37.802014    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:37.882367    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:38.042729    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:38.363455    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:39.004432    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:40.285497    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:42.845695    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:44.062382    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:31:47.965841    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:31:53.074521    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:31:58.206789    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:32:01.413509    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:32:10.027682    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:32:10.500963    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:32:15.820048    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:32:18.687723    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:32:34.512639    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:32:59.648574    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:33:00.391866    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1128 00:33:05.982858    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:33:17.075371    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:33:33.546830    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:33:34.397805    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:33:37.491458    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p default-k8s-diff-port-172365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: exit status 80 (8m25.972787238s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-172365] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-diff-port-172365 in cluster default-k8s-diff-port-172365
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-diff-port-172365" ...
	* Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:31:29.329121  378575 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:31:29.329269  378575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:31:29.329278  378575 out.go:309] Setting ErrFile to fd 2...
	I1128 00:31:29.329284  378575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:31:29.329540  378575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1128 00:31:29.329884  378575 out.go:303] Setting JSON to false
	I1128 00:31:29.331149  378575 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":4439,"bootTime":1701127051,"procs":432,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1128 00:31:29.331219  378575 start.go:138] virtualization:  
	I1128 00:31:29.333721  378575 out.go:177] * [default-k8s-diff-port-172365] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1128 00:31:29.336186  378575 out.go:177]   - MINIKUBE_LOCATION=17206
	I1128 00:31:29.338026  378575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 00:31:29.336365  378575 notify.go:220] Checking for updates...
	I1128 00:31:29.339748  378575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1128 00:31:29.341686  378575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1128 00:31:29.343565  378575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1128 00:31:29.345311  378575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 00:31:29.347640  378575 config.go:182] Loaded profile config "default-k8s-diff-port-172365": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1128 00:31:29.348126  378575 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 00:31:29.370891  378575 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1128 00:31:29.370994  378575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:31:29.448668  378575 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 00:31:29.439304956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:31:29.448770  378575 docker.go:295] overlay module found
	I1128 00:31:29.450823  378575 out.go:177] * Using the docker driver based on existing profile
	I1128 00:31:29.452696  378575 start.go:298] selected driver: docker
	I1128 00:31:29.452711  378575 start.go:902] validating driver "docker" against &{Name:default-k8s-diff-port-172365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-172365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:31:29.452807  378575 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 00:31:29.453454  378575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:31:29.529781  378575 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 00:31:29.520268924 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:31:29.530103  378575 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 00:31:29.530173  378575 cni.go:84] Creating CNI manager for ""
	I1128 00:31:29.530190  378575 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1128 00:31:29.530207  378575 start_flags.go:323] config:
	{Name:default-k8s-diff-port-172365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-172365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:31:29.533408  378575 out.go:177] * Starting control plane node default-k8s-diff-port-172365 in cluster default-k8s-diff-port-172365
	I1128 00:31:29.535126  378575 cache.go:121] Beginning downloading kic base image for docker with docker
	I1128 00:31:29.537122  378575 out.go:177] * Pulling base image ...
	I1128 00:31:29.539283  378575 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1128 00:31:29.539304  378575 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1128 00:31:29.539322  378575 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1128 00:31:29.539330  378575 cache.go:56] Caching tarball of preloaded images
	I1128 00:31:29.539399  378575 preload.go:174] Found /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1128 00:31:29.539407  378575 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1128 00:31:29.539515  378575 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/config.json ...
	I1128 00:31:29.556703  378575 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon, skipping pull
	I1128 00:31:29.556728  378575 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in daemon, skipping load
	I1128 00:31:29.556749  378575 cache.go:194] Successfully downloaded all kic artifacts
	I1128 00:31:29.556794  378575 start.go:365] acquiring machines lock for default-k8s-diff-port-172365: {Name:mk4b83725b6892d32e606899dd8518eb305c458d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 00:31:29.556868  378575 start.go:369] acquired machines lock for "default-k8s-diff-port-172365" in 45.884µs
	I1128 00:31:29.556898  378575 start.go:96] Skipping create...Using existing machine configuration
	I1128 00:31:29.556908  378575 fix.go:54] fixHost starting: 
	I1128 00:31:29.557169  378575 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172365 --format={{.State.Status}}
	I1128 00:31:29.585992  378575 fix.go:102] recreateIfNeeded on default-k8s-diff-port-172365: state=Stopped err=<nil>
	W1128 00:31:29.586022  378575 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 00:31:29.588337  378575 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-172365" ...
	I1128 00:31:29.590575  378575 cli_runner.go:164] Run: docker start default-k8s-diff-port-172365
	I1128 00:31:29.897391  378575 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172365 --format={{.State.Status}}
	I1128 00:31:29.925180  378575 kic.go:430] container "default-k8s-diff-port-172365" state is running.
	I1128 00:31:29.925536  378575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172365
	I1128 00:31:29.953904  378575 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/config.json ...
	I1128 00:31:29.954116  378575 machine.go:88] provisioning docker machine ...
	I1128 00:31:29.954136  378575 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-172365"
	I1128 00:31:29.954231  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:29.973290  378575 main.go:141] libmachine: Using SSH client type: native
	I1128 00:31:29.973755  378575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1128 00:31:29.973770  378575 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-172365 && echo "default-k8s-diff-port-172365" | sudo tee /etc/hostname
	I1128 00:31:29.974443  378575 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1128 00:31:33.129219  378575 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-172365
	
	I1128 00:31:33.129304  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:33.147182  378575 main.go:141] libmachine: Using SSH client type: native
	I1128 00:31:33.147586  378575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1128 00:31:33.147611  378575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-172365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-172365/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-172365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 00:31:33.277011  378575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:31:33.277036  378575 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17206-2172/.minikube CaCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17206-2172/.minikube}
	I1128 00:31:33.277069  378575 ubuntu.go:177] setting up certificates
	I1128 00:31:33.277078  378575 provision.go:83] configureAuth start
	I1128 00:31:33.277139  378575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172365
	I1128 00:31:33.296017  378575 provision.go:138] copyHostCerts
	I1128 00:31:33.296076  378575 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem, removing ...
	I1128 00:31:33.296100  378575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem
	I1128 00:31:33.296173  378575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/ca.pem (1078 bytes)
	I1128 00:31:33.296262  378575 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem, removing ...
	I1128 00:31:33.296272  378575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem
	I1128 00:31:33.296298  378575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/cert.pem (1123 bytes)
	I1128 00:31:33.296638  378575 exec_runner.go:144] found /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem, removing ...
	I1128 00:31:33.296652  378575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem
	I1128 00:31:33.296686  378575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17206-2172/.minikube/key.pem (1679 bytes)
	I1128 00:31:33.296747  378575 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-172365 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-172365]
	I1128 00:31:33.483498  378575 provision.go:172] copyRemoteCerts
	I1128 00:31:33.483594  378575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 00:31:33.483648  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:33.502549  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	I1128 00:31:33.596672  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 00:31:33.623527  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 00:31:33.651196  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 00:31:33.679054  378575 provision.go:86] duration metric: configureAuth took 401.957814ms
	I1128 00:31:33.679081  378575 ubuntu.go:193] setting minikube options for container-runtime
	I1128 00:31:33.679280  378575 config.go:182] Loaded profile config "default-k8s-diff-port-172365": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1128 00:31:33.679340  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:33.696591  378575 main.go:141] libmachine: Using SSH client type: native
	I1128 00:31:33.696994  378575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1128 00:31:33.697011  378575 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1128 00:31:33.831825  378575 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1128 00:31:33.831845  378575 ubuntu.go:71] root file system type: overlay
	I1128 00:31:33.831954  378575 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1128 00:31:33.832021  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:33.849953  378575 main.go:141] libmachine: Using SSH client type: native
	I1128 00:31:33.850392  378575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1128 00:31:33.850473  378575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1128 00:31:33.992367  378575 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1128 00:31:33.992500  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:34.011441  378575 main.go:141] libmachine: Using SSH client type: native
	I1128 00:31:34.011885  378575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3be600] 0x3c0d70 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1128 00:31:34.011912  378575 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1128 00:31:34.145175  378575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 00:31:34.145234  378575 machine.go:91] provisioned docker machine in 4.191101023s
	I1128 00:31:34.145262  378575 start.go:300] post-start starting for "default-k8s-diff-port-172365" (driver="docker")
	I1128 00:31:34.145309  378575 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 00:31:34.145389  378575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 00:31:34.145460  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:34.163343  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	I1128 00:31:34.256794  378575 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 00:31:34.260658  378575 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1128 00:31:34.260693  378575 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1128 00:31:34.260705  378575 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1128 00:31:34.260713  378575 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1128 00:31:34.260726  378575 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-2172/.minikube/addons for local assets ...
	I1128 00:31:34.260781  378575 filesync.go:126] Scanning /home/jenkins/minikube-integration/17206-2172/.minikube/files for local assets ...
	I1128 00:31:34.260868  378575 filesync.go:149] local asset: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem -> 74602.pem in /etc/ssl/certs
	I1128 00:31:34.260975  378575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 00:31:34.271655  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem --> /etc/ssl/certs/74602.pem (1708 bytes)
	I1128 00:31:34.304112  378575 start.go:303] post-start completed in 158.81296ms
	I1128 00:31:34.304196  378575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 00:31:34.304243  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:34.323481  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	I1128 00:31:34.416541  378575 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1128 00:31:34.422062  378575 fix.go:56] fixHost completed within 4.865149687s
	I1128 00:31:34.422086  378575 start.go:83] releasing machines lock for "default-k8s-diff-port-172365", held for 4.865204063s
	I1128 00:31:34.422152  378575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-172365
	I1128 00:31:34.441932  378575 ssh_runner.go:195] Run: cat /version.json
	I1128 00:31:34.441993  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:34.442233  378575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 00:31:34.442268  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:31:34.465088  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	I1128 00:31:34.476861  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	I1128 00:31:34.558864  378575 ssh_runner.go:195] Run: systemctl --version
	I1128 00:31:34.692390  378575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 00:31:34.697904  378575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1128 00:31:34.719602  378575 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1128 00:31:34.719672  378575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 00:31:34.729538  378575 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 00:31:34.729560  378575 start.go:472] detecting cgroup driver to use...
	I1128 00:31:34.729588  378575 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 00:31:34.729687  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:31:34.754704  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1128 00:31:34.765448  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1128 00:31:34.776999  378575 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1128 00:31:34.777061  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1128 00:31:34.788061  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1128 00:31:34.799454  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1128 00:31:34.810726  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1128 00:31:34.821245  378575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 00:31:34.831090  378575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1128 00:31:34.841895  378575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 00:31:34.851253  378575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 00:31:34.860487  378575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:31:34.954458  378575 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1128 00:31:35.056860  378575 start.go:472] detecting cgroup driver to use...
	I1128 00:31:35.056951  378575 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1128 00:31:35.057034  378575 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1128 00:31:35.076367  378575 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I1128 00:31:35.076483  378575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1128 00:31:35.097432  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 00:31:35.119851  378575 ssh_runner.go:195] Run: which cri-dockerd
	I1128 00:31:35.124605  378575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1128 00:31:35.136986  378575 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1128 00:31:35.163221  378575 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1128 00:31:35.285725  378575 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1128 00:31:35.402680  378575 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1128 00:31:35.402799  378575 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1128 00:31:35.425019  378575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:31:35.545948  378575 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1128 00:31:35.897200  378575 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1128 00:31:35.989578  378575 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1128 00:31:36.085911  378575 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1128 00:31:36.184245  378575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:31:36.278850  378575 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1128 00:31:36.296521  378575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 00:31:36.406246  378575 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1128 00:31:36.493050  378575 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1128 00:31:36.493150  378575 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1128 00:31:36.498598  378575 start.go:540] Will wait 60s for crictl version
	I1128 00:31:36.498679  378575 ssh_runner.go:195] Run: which crictl
	I1128 00:31:36.504296  378575 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 00:31:36.560626  378575 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1128 00:31:36.560741  378575 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1128 00:31:36.588247  378575 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1128 00:31:36.619845  378575 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1128 00:31:36.619948  378575 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-172365 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1128 00:31:36.638167  378575 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1128 00:31:36.642728  378575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:31:36.655452  378575 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1128 00:31:36.655516  378575 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1128 00:31:36.681365  378575 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1128 00:31:36.681389  378575 docker.go:601] Images already preloaded, skipping extraction
	I1128 00:31:36.681454  378575 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1128 00:31:36.702901  378575 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1128 00:31:36.702926  378575 cache_images.go:84] Images are preloaded, skipping loading
	I1128 00:31:36.702992  378575 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1128 00:31:36.768464  378575 cni.go:84] Creating CNI manager for ""
	I1128 00:31:36.768491  378575 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1128 00:31:36.768511  378575 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 00:31:36.768529  378575 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-172365 NodeName:default-k8s-diff-port-172365 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 00:31:36.768666  378575 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-172365"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 00:31:36.768744  378575 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-172365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-172365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1128 00:31:36.768807  378575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 00:31:36.779383  378575 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 00:31:36.779441  378575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 00:31:36.788768  378575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (389 bytes)
	I1128 00:31:36.809015  378575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 00:31:36.828836  378575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I1128 00:31:36.849079  378575 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1128 00:31:36.853204  378575 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 00:31:36.865784  378575 certs.go:56] Setting up /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365 for IP: 192.168.76.2
	I1128 00:31:36.865809  378575 certs.go:190] acquiring lock for shared ca certs: {Name:mkf476800f388ef5f0e09831530252d4aaf23bd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:31:36.865960  378575 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key
	I1128 00:31:36.866009  378575 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key
	I1128 00:31:36.866088  378575 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/client.key
	I1128 00:31:36.866152  378575 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/apiserver.key.31bdca25
	I1128 00:31:36.866198  378575 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/proxy-client.key
	I1128 00:31:36.866339  378575 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460.pem (1338 bytes)
	W1128 00:31:36.866375  378575 certs.go:433] ignoring /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460_empty.pem, impossibly tiny 0 bytes
	I1128 00:31:36.866387  378575 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca-key.pem (1675 bytes)
	I1128 00:31:36.866413  378575 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/ca.pem (1078 bytes)
	I1128 00:31:36.866441  378575 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/cert.pem (1123 bytes)
	I1128 00:31:36.866471  378575 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/certs/home/jenkins/minikube-integration/17206-2172/.minikube/certs/key.pem (1679 bytes)
	I1128 00:31:36.866518  378575 certs.go:437] found cert: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem (1708 bytes)
	I1128 00:31:36.867202  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 00:31:36.898772  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 00:31:36.924768  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 00:31:36.952102  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/default-k8s-diff-port-172365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 00:31:36.979406  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 00:31:37.007235  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 00:31:37.040643  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 00:31:37.067732  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 00:31:37.094581  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 00:31:37.120762  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/certs/7460.pem --> /usr/share/ca-certificates/7460.pem (1338 bytes)
	I1128 00:31:37.148116  378575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/ssl/certs/74602.pem --> /usr/share/ca-certificates/74602.pem (1708 bytes)
	I1128 00:31:37.173911  378575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 00:31:37.192912  378575 ssh_runner.go:195] Run: openssl version
	I1128 00:31:37.199641  378575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 00:31:37.210922  378575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:31:37.215360  378575 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 23:26 /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:31:37.215419  378575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 00:31:37.223579  378575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 00:31:37.233607  378575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7460.pem && ln -fs /usr/share/ca-certificates/7460.pem /etc/ssl/certs/7460.pem"
	I1128 00:31:37.243869  378575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7460.pem
	I1128 00:31:37.248960  378575 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 23:30 /usr/share/ca-certificates/7460.pem
	I1128 00:31:37.249070  378575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7460.pem
	I1128 00:31:37.257240  378575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7460.pem /etc/ssl/certs/51391683.0"
	I1128 00:31:37.267595  378575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74602.pem && ln -fs /usr/share/ca-certificates/74602.pem /etc/ssl/certs/74602.pem"
	I1128 00:31:37.279486  378575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74602.pem
	I1128 00:31:37.284034  378575 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 23:30 /usr/share/ca-certificates/74602.pem
	I1128 00:31:37.284144  378575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74602.pem
	I1128 00:31:37.292864  378575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74602.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 00:31:37.303318  378575 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 00:31:37.307624  378575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 00:31:37.317462  378575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 00:31:37.326474  378575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 00:31:37.334360  378575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 00:31:37.342600  378575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 00:31:37.350363  378575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 00:31:37.358185  378575 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-172365 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-172365 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 00:31:37.358343  378575 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1128 00:31:37.377921  378575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 00:31:37.388347  378575 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 00:31:37.388367  378575 kubeadm.go:636] restartCluster start
	I1128 00:31:37.388417  378575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 00:31:37.397490  378575 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:37.398144  378575 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-172365" does not appear in /home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1128 00:31:37.398406  378575 kubeconfig.go:146] "default-k8s-diff-port-172365" context is missing from /home/jenkins/minikube-integration/17206-2172/kubeconfig - will repair!
	I1128 00:31:37.398856  378575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/kubeconfig: {Name:mk7ba64d42902767d9bc759b2ed9230b4474c63d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 00:31:37.400480  378575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 00:31:37.410248  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:37.410425  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:37.421898  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:37.421958  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:37.422013  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:37.432859  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:37.933529  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:37.933611  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:37.945285  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:38.433735  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:38.433812  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:38.445505  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:38.933104  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:38.933212  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:38.946140  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:39.434003  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:39.434080  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:39.445481  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:39.933075  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:39.933169  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:39.945835  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:40.433028  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:40.433115  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:40.445239  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:40.933729  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:40.933812  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:40.946416  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:41.432979  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:41.433093  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:41.444730  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:41.933029  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:41.933126  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:41.945776  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:42.433027  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:42.433108  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:42.445143  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:42.933677  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:42.933771  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:42.945735  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:43.433009  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:43.433096  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:43.445174  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:43.933808  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:43.933892  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:43.947008  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:44.433512  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:44.433606  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:44.445395  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:44.933925  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:44.934023  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:44.946765  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:45.433269  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:45.433367  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:45.444948  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:45.933026  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:45.933104  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:45.945440  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:46.433062  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:46.433139  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:46.444638  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:46.933199  378575 api_server.go:166] Checking apiserver status ...
	I1128 00:31:46.933289  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 00:31:46.945298  378575 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:47.410999  378575 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
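The ~500ms pgrep loop above is a poll-until-deadline check: the runner keeps re-issuing `sudo pgrep -xnf kube-apiserver.*minikube.*` until a pid appears or the surrounding context expires, which is what produces the `needs reconfigure: apiserver error: context deadline exceeded` verdict. A minimal Go sketch of that pattern; `runPgrep` and the 10s budget are illustrative stand-ins for minikube's ssh_runner and its real deadline:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // runPgrep stands in for minikube's ssh_runner: look for a kube-apiserver
    // process and return pgrep's output, or an error if none is running.
    func runPgrep(ctx context.Context) (string, error) {
        out, err := exec.CommandContext(ctx, "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        return string(out), err
    }

    // waitForAPIServerPID re-checks every 500ms until pgrep succeeds or the
    // context deadline expires, mirroring the loop in the log above.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if pid, err := runPgrep(ctx); err == nil {
                return pid, nil
            }
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("apiserver error: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        // Illustrative 10s budget; the real wait is bounded by the caller.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if _, err := waitForAPIServerPID(ctx); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("needs reconfigure:", err)
        }
    }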
	I1128 00:31:47.411028  378575 kubeadm.go:1128] stopping kube-system containers ...
	I1128 00:31:47.411129  378575 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1128 00:31:47.432384  378575 docker.go:469] Stopping containers: [532ebf4ade6f 2539a0da1b37 1dd827f86dfe 36edcfcab466 7ead955bef80 475303d67228 8b6566cb4b30 3348ca17f74d 74acc3ddaa17 7557af00d307 107e1f2402ae e7f3404c4ffc 2d2553ce397e cde9634dfa99 482e54c817a4]
	I1128 00:31:47.432452  378575 ssh_runner.go:195] Run: docker stop 532ebf4ade6f 2539a0da1b37 1dd827f86dfe 36edcfcab466 7ead955bef80 475303d67228 8b6566cb4b30 3348ca17f74d 74acc3ddaa17 7557af00d307 107e1f2402ae e7f3404c4ffc 2d2553ce397e cde9634dfa99 482e54c817a4
	I1128 00:31:47.453541  378575 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 00:31:47.467899  378575 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 00:31:47.477871  378575 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov 28 00:30 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov 28 00:30 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2051 Nov 28 00:30 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov 28 00:30 /etc/kubernetes/scheduler.conf
	
	I1128 00:31:47.477929  378575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1128 00:31:47.487973  378575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1128 00:31:47.497383  378575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1128 00:31:47.506714  378575 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:47.506772  378575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1128 00:31:47.516272  378575 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1128 00:31:47.527569  378575 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1128 00:31:47.527623  378575 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1128 00:31:47.537165  378575 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:31:47.546950  378575 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 00:31:47.546973  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:31:47.608994  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:31:48.684577  378575 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.075546864s)
	I1128 00:31:48.684611  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:31:48.863801  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:31:48.937267  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
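Reconfiguration then replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A sketch of that sequencing, assuming local execution instead of minikube's SSH runner and omitting the `sudo env PATH=...` wrapping shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The phase sequence replayed in the log, in order. Running kubeadm
        // directly on the host is an assumption of this sketch.
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            out, err := exec.Command("kubeadm", append(p, "--config", cfg)...).CombinedOutput()
            if err != nil {
                fmt.Printf("kubeadm %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("control plane reconfigured from", cfg)
    }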
	I1128 00:31:49.029411  378575 api_server.go:52] waiting for apiserver process to appear ...
	I1128 00:31:49.029487  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:31:49.062166  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:31:49.576778  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:31:50.075831  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:31:50.576628  378575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 00:31:50.600193  378575 api_server.go:72] duration metric: took 1.5707816s to wait for apiserver process to appear ...
	I1128 00:31:50.600215  378575 api_server.go:88] waiting for apiserver healthz status ...
	I1128 00:31:50.600249  378575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1128 00:31:54.294018  378575 api_server.go:279] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:31:54.294095  378575 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:31:54.294121  378575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1128 00:31:54.344694  378575 api_server.go:279] https://192.168.76.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 00:31:54.344727  378575 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 00:31:54.845373  378575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1128 00:31:54.855364  378575 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:31:54.855442  378575 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:31:55.344806  378575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1128 00:31:55.360137  378575 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 00:31:55.360223  378575 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 00:31:55.845625  378575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1128 00:31:55.862271  378575 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1128 00:31:55.883798  378575 api_server.go:141] control plane version: v1.28.4
	I1128 00:31:55.883862  378575 api_server.go:131] duration metric: took 5.283640059s to wait for apiserver health ...
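The healthz wait above tolerates transient failures: an unauthenticated probe can see 403 (`system:anonymous`) while RBAC bootstraps and 500 while post-start hooks settle, so the check simply retries until a 200 `ok`. A rough Go equivalent; the skipped TLS verification, hard-coded URL, and 2-minute cap are assumptions of the sketch, not minikube's actual client setup:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Unauthenticated probe, so TLS verification is skipped here; that is
        // a simplification for the sketch.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.76.2:8444/healthz"
        deadline := time.Now().Add(2 * time.Minute) // illustrative cap
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return
                }
                // 403 (anonymous) and 500 (hooks still settling) both mean "retry".
                fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("healthz never returned ok")
    }

Treating 403 and 500 identically keeps the wait robust to the order in which apiserver post-start hooks come up.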
	I1128 00:31:55.883894  378575 cni.go:84] Creating CNI manager for ""
	I1128 00:31:55.883940  378575 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1128 00:31:55.886375  378575 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 00:31:55.888524  378575 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 00:31:55.924986  378575 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 00:31:55.968492  378575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 00:31:55.985410  378575 system_pods.go:59] 8 kube-system pods found
	I1128 00:31:55.985483  378575 system_pods.go:61] "coredns-5dd5756b68-d6kb4" [d1cf8657-174c-4423-bad4-deb14153c869] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 00:31:55.985513  378575 system_pods.go:61] "etcd-default-k8s-diff-port-172365" [17fc796e-595a-432b-83b3-ed3bc868b97f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 00:31:55.985554  378575 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-172365" [2f624c70-0c63-476d-ba42-127e64f476cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 00:31:55.985586  378575 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-172365" [fd51c6a1-aa51-4452-baac-2b79d093eaaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 00:31:55.985622  378575 system_pods.go:61] "kube-proxy-ct7pn" [327b8614-c459-4aae-b4d2-bcdf65aaf738] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 00:31:55.985660  378575 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-172365" [b33de1f9-6bac-4951-b088-552b0e9ba838] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 00:31:55.985686  378575 system_pods.go:61] "metrics-server-57f55c9bc5-6d6b9" [58cb6357-8e0e-4294-9f4d-75fe2f36ac3b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 00:31:55.985712  378575 system_pods.go:61] "storage-provisioner" [4425cacc-e9c6-497a-9ded-9bf375c3c82f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 00:31:55.985736  378575 system_pods.go:74] duration metric: took 17.184189ms to wait for pod list to return data ...
	I1128 00:31:55.985776  378575 node_conditions.go:102] verifying NodePressure condition ...
	I1128 00:31:55.994219  378575 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1128 00:31:55.994292  378575 node_conditions.go:123] node cpu capacity is 2
	I1128 00:31:55.994336  378575 node_conditions.go:105] duration metric: took 8.539992ms to run NodePressure ...
	I1128 00:31:55.994376  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 00:31:56.477807  378575 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 00:31:56.485629  378575 kubeadm.go:787] kubelet initialised
	I1128 00:31:56.485697  378575 kubeadm.go:788] duration metric: took 7.86731ms waiting for restarted kubelet to initialise ...
	I1128 00:31:56.485720  378575 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:31:56.495970  378575 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace to be "Ready" ...
	I1128 00:31:58.543556  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:00.545697  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:03.044805  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:05.544397  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:07.544652  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:10.044295  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:12.543530  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:15.044386  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:17.543966  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:19.544115  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:21.544319  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:24.043218  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:26.044121  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:28.544172  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:31.043449  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:33.044111  378575 pod_ready.go:102] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:34.543362  378575 pod_ready.go:92] pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace has status "Ready":"True"
	I1128 00:32:34.543392  378575 pod_ready.go:81] duration metric: took 38.047365096s waiting for pod "coredns-5dd5756b68-d6kb4" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.543403  378575 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.548491  378575 pod_ready.go:92] pod "etcd-default-k8s-diff-port-172365" in "kube-system" namespace has status "Ready":"True"
	I1128 00:32:34.548549  378575 pod_ready.go:81] duration metric: took 5.138209ms waiting for pod "etcd-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.548575  378575 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.563116  378575 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-172365" in "kube-system" namespace has status "Ready":"True"
	I1128 00:32:34.563182  378575 pod_ready.go:81] duration metric: took 14.58563ms waiting for pod "kube-apiserver-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.563207  378575 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.572108  378575 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-172365" in "kube-system" namespace has status "Ready":"True"
	I1128 00:32:34.572174  378575 pod_ready.go:81] duration metric: took 8.923106ms waiting for pod "kube-controller-manager-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.572200  378575 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ct7pn" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.582480  378575 pod_ready.go:92] pod "kube-proxy-ct7pn" in "kube-system" namespace has status "Ready":"True"
	I1128 00:32:34.582548  378575 pod_ready.go:81] duration metric: took 10.32131ms waiting for pod "kube-proxy-ct7pn" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.582574  378575 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.941800  378575 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-172365" in "kube-system" namespace has status "Ready":"True"
	I1128 00:32:34.941824  378575 pod_ready.go:81] duration metric: took 359.229069ms waiting for pod "kube-scheduler-default-k8s-diff-port-172365" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:34.941836  378575 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace to be "Ready" ...
	I1128 00:32:37.247172  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:39.247353  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:41.247584  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:43.248213  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:45.248312  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:47.749221  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:50.247124  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:52.247737  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:54.747354  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:56.750535  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:32:59.247324  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:01.248130  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:03.248252  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:05.748359  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:07.749260  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:10.248652  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:12.747332  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:14.748239  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:16.748884  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:19.247913  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:21.747856  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:23.748068  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:26.248172  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:28.248248  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:30.749124  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:33.247281  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:35.247656  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:37.248062  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:39.747933  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:41.748108  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:44.252411  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:46.748627  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:48.749030  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:51.248149  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:53.747654  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:56.247616  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:33:58.247740  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:00.248381  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:02.748188  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:05.248183  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:07.248287  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:09.748851  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:12.247787  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:14.748183  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:17.247909  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:19.248503  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:21.252442  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:23.747998  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:26.248705  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:28.748817  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:31.248439  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:33.248862  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:35.249728  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:37.749328  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:40.249148  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:42.747304  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:44.747940  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:47.247376  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:49.247678  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:51.748344  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:54.247428  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:56.748255  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:34:59.247258  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:01.747019  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:03.747705  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:05.748602  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:08.249047  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:10.747545  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:12.748911  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:15.247626  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:17.247735  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:19.748128  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:22.247205  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:24.248178  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:26.248610  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:28.749219  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:30.750010  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:33.247597  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:35.748802  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:37.748833  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:40.247427  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:42.248054  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:44.747349  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:46.747979  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:49.247845  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:51.747224  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:53.747616  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:56.247769  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:35:58.247963  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:00.747343  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:03.248004  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:05.746616  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:07.747062  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:10.247313  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:12.748069  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:15.247716  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:17.247744  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:19.248055  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:21.747056  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:23.747887  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:26.247385  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:28.747864  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:31.247393  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:33.247592  378575 pod_ready.go:102] pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace has status "Ready":"False"
	I1128 00:36:34.942531  378575 pod_ready.go:81] duration metric: took 4m0.000677758s waiting for pod "metrics-server-57f55c9bc5-6d6b9" in "kube-system" namespace to be "Ready" ...
	E1128 00:36:34.942567  378575 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 00:36:34.942585  378575 pod_ready.go:38] duration metric: took 4m38.456843678s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 00:36:34.942613  378575 kubeadm.go:640] restartCluster took 4m57.55423984s
	W1128 00:36:34.942694  378575 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
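The `pod_ready` polling that just timed out is a per-pod wait on the `Ready` condition, re-checked every couple of seconds until the 4m0s budget runs out. A hedged client-go sketch of the same idea; the kubeconfig path and 2.5s interval here are illustrative, not minikube's actual values:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; the run above uses the Jenkins one.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-6d6b9", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
                fmt.Println(`pod has status "Ready":"False"`)
            }
            select {
            case <-ctx.Done():
                fmt.Println("WaitExtra: waitPodCondition:", ctx.Err())
                return
            case <-time.After(2500 * time.Millisecond):
            }
        }
    }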
	I1128 00:36:34.942720  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1128 00:39:06.070712  378575 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (2m31.127963816s)
	W1128 00:39:06.070750  378575 kubeadm.go:898] /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": <nil>
	I1128 00:39:06.070811  378575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	W1128 00:39:06.070838  378575 ssh_runner.go:129] session error, resetting client: EOF
	I1128 00:39:06.070856  378575 retry.go:31] will retry after 326.553657ms: EOF
	I1128 00:39:06.398449  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:06.416258  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:09.142689  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:41982->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:09.142768  378575 kubeadm.go:427] delete failed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": wait: remote command exited without exit status or exit signal
	stdout:
	[reset] Reading configuration from the cluster...
	[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	[preflight] Running pre-flight checks
	[reset] Deleted contents of the etcd data directory: /var/lib/minikube/etcd
	[reset] Stopping the kubelet service
	[reset] Unmounting mounted directories in "/var/lib/kubelet"
	
	stderr:
	W1128 00:36:34.983047    7040 resetconfiguration.go:49] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
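From here on the log is dominated by the sshutil retry loop: every command needs a fresh SSH session to 127.0.0.1:33088, each handshake is reset by the peer, and the runner logs "will retry", backs off, and dials again. A sketch of that dial-with-backoff pattern using golang.org/x/crypto/ssh; the address, user, key path, and attempt cap are placeholders:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry mirrors the pattern in the log: log each handshake
    // failure, back off, and try again up to a cap.
    func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-harness setting
            Timeout:         10 * time.Second,
        }
        backoff := 300 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return client, nil
            }
            lastErr = err
            fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
            time.Sleep(backoff)
            backoff *= 2 // exponential backoff between attempts
        }
        return nil, lastErr
    }

    func main() {
        // Address, user, key path, and attempt cap are placeholders.
        if _, err := dialWithRetry("127.0.0.1:33088", "docker", "/path/to/id_rsa", 5); err != nil {
            fmt.Println("giving up:", err)
        }
    }

Exponential backoff keeps the retries from hammering a host whose sshd is still coming back up.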
	I1128 00:39:09.142831  378575 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 00:39:09.142885  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:09.160750  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:12.214618  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:41992->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:12.214700  378575 kubeadm.go:406] StartCluster complete in 7m34.856527727s
	I1128 00:39:12.215399  378575 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 00:39:12.215470  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 00:39:12.215519  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:12.232773  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:15.286680  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56194->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:15.286762  378575 logs.go:281] Failed to list containers for "kube-apiserver": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56194->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:15.286783  378575 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 00:39:15.286841  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 00:39:15.286889  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:15.304164  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:18.358649  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:18.358725  378575 logs.go:281] Failed to list containers for "etcd": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:18.358748  378575 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 00:39:18.358808  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 00:39:18.358858  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:18.376086  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:21.430642  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56220->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:21.430719  378575 logs.go:281] Failed to list containers for "coredns": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56220->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:21.430738  378575 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 00:39:21.430802  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 00:39:21.430848  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:21.448212  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:24.502702  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56232->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:24.502779  378575 logs.go:281] Failed to list containers for "kube-scheduler": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56232->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:24.502798  378575 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 00:39:24.502861  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 00:39:24.502912  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:24.520464  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:27.574707  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43864->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:27.574782  378575 logs.go:281] Failed to list containers for "kube-proxy": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43864->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:27.574801  378575 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 00:39:27.574865  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 00:39:27.574947  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:27.591716  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:30.646650  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43866->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:30.646727  378575 logs.go:281] Failed to list containers for "kube-controller-manager": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43866->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:30.646747  378575 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 00:39:30.646807  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 00:39:30.646859  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:30.663883  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:33.718636  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43868->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:33.718712  378575 logs.go:281] Failed to list containers for "kindnet": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43868->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:33.718729  378575 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 00:39:33.718786  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 00:39:33.718834  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:33.736886  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:36.790642  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60740->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:36.790726  378575 logs.go:281] Failed to list containers for "storage-provisioner": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60740->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:36.790746  378575 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1128 00:39:36.790812  378575 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1128 00:39:36.790862  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:36.807748  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:39.862662  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60752->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:39.862755  378575 logs.go:281] Failed to list containers for "kubernetes-dashboard": crictl list: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60752->127.0.0.1:33088: read: connection reset by peer
	I1128 00:39:39.862776  378575 logs.go:123] Gathering logs for describe nodes ...
	I1128 00:39:39.862795  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 00:39:39.862865  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:39.881001  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:42.934677  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:42.934754  378575 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33088: read: connection reset by peer output: 
	I1128 00:39:42.934772  378575 logs.go:123] Gathering logs for Docker ...
	I1128 00:39:42.934788  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1128 00:39:42.934860  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:42.956032  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:46.006655  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34252->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:46.006732  378575 logs.go:130] failed Docker: command: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400" NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:34252->127.0.0.1:33088: read: connection reset by peer output: 
	I1128 00:39:46.006750  378575 logs.go:123] Gathering logs for container status ...
	I1128 00:39:46.006763  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 00:39:46.006836  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:46.023730  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:49.078620  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34266->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:49.078693  378575 logs.go:130] failed container status: command: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:34266->127.0.0.1:33088: read: connection reset by peer output: 
	I1128 00:39:49.078711  378575 logs.go:123] Gathering logs for kubelet ...
	I1128 00:39:49.078723  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 00:39:49.078791  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:49.097166  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:52.150686  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:34282->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:52.150761  378575 logs.go:130] failed kubelet: command: /bin/bash -c "sudo journalctl -u kubelet -n 400" NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:34282->127.0.0.1:33088: read: connection reset by peer output: 
	I1128 00:39:52.150777  378575 logs.go:123] Gathering logs for dmesg ...
	I1128 00:39:52.150789  378575 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 00:39:52.150864  378575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:39:52.168133  378575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:39:55.222657  378575 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:47288->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:55.222736  378575 logs.go:130] failed dmesg: command: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400" NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47288->127.0.0.1:33088: read: connection reset by peer output: 
	W1128 00:39:55.222758  378575 out.go:369] Error starting cluster: cp: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41992->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:55.222780  378575 out.go:239] * 
	W1128 00:39:55.222829  378575 out.go:239] X Error starting cluster: cp: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41992->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:55.222841  378575 out.go:239] * 
	W1128 00:39:55.223855  378575 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:39:55.226649  378575 out.go:177] 
	W1128 00:39:55.229634  378575 out.go:239] X Exiting due to GUEST_START: failed to start node: cp: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41992->127.0.0.1:33088: read: connection reset by peer
	W1128 00:39:55.229650  378575 out.go:239] * 
	W1128 00:39:55.230638  378575 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:39:55.233182  378575 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p default-k8s-diff-port-172365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4": exit status 80
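The stderr dump above shows minikube's sshutil giving up after repeated SSH handshake failures against 127.0.0.1:33088, the host port Docker maps to the node's 22/tcp. A minimal Go sketch that reproduces the same dial outside the test harness, assuming the key path, user, and port taken from the log above (illustrative only, not minikube's actual sshutil code):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port come from the sshutil log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only against a local test node
	}
	// With the node in the state captured above, this fails with
	// "ssh: handshake failed: ... connection reset by peer".
	client, err := ssh.Dial("tcp", "127.0.0.1:33088", cfg)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("ssh handshake OK")
}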
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-172365
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-172365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261",
	        "Created": "2023-11-28T00:29:46.397673041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:31:29.890726479Z",
	            "FinishedAt": "2023-11-28T00:31:28.766003125Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hostname",
	        "HostsPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hosts",
	        "LogPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261-json.log",
	        "Name": "/default-k8s-diff-port-172365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-172365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-172365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172365",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96c12a95771c011e7355bb03f7d824b67f5102abdf4f387627f1451ac81c26e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e96c12a95771",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-172365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "449309480d62",
	                        "default-k8s-diff-port-172365"
	                    ],
	                    "NetworkID": "96b80b52da08f1aa76e6880f9d2729ab961d1f35bb5b02e1e966c7e67885c240",
	                    "EndpointID": "7168e7c6350adfd9bbb1012fbf483698dc0401739424611b31a52b1755bd43fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
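The inspect output above is where that port mapping comes from: NetworkSettings.Ports["22/tcp"][0].HostPort is 33088, which is exactly what the docker container inspect -f template in the stderr log extracts. A short Go sketch that pulls the same value out of saved inspect JSON, assuming docker inspect default-k8s-diff-port-172365 > inspect.json was run first (hypothetical file name, illustrative only):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// container models only the fields we need from `docker inspect` output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	data, err := os.ReadFile("inspect.json")
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect emits a JSON array of containers
	if err := json.Unmarshal(data, &cs); err != nil {
		panic(err)
	}
	// For the output above this prints 127.0.0.1:33088 — the same value the
	// Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} yields.
	b := cs[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
}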
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
E1128 00:39:57.442038    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (2.993738613s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:39:58.294732  406098 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47292->127.0.0.1:33088: read: connection reset by peer
	E1128 00:39:58.294754  406098 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47292->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
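The --format={{.Host}} flag in the status command above is a Go text/template rendered against minikube's status fields, which is why the stdout block contains only the single word Error. A minimal sketch of that rendering, assuming a simplified Status struct (not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

type Status struct{ Host, Kubelet, APIServer string }

func main() {
	// Equivalent of passing --format={{.Host}} on the command line.
	t := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := t.Execute(os.Stdout, Status{Host: "Error"}); err != nil {
		panic(err) // prints "Error", matching the stdout block above
	}
}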
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-172365" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (509.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (547.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 00:39:59.151786    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:40:00.535468    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:40:22.140698    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:41:11.466731    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:41:21.163090    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:41:37.725313    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:42:01.413789    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:42:10.027720    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:42:10.501032    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:42:15.820131    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:43:00.392084    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:43:17.075586    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:43:34.398469    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:43:37.490994    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:44:07.704310    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated twice in a row; duplicate collapsed]
E1128 00:44:13.484973    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 14 times in a row; duplicates collapsed]
E1128 00:44:59.151782    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 8 times in a row; duplicates collapsed]
E1128 00:45:22.140454    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 3 times in a row; duplicates collapsed]
E1128 00:45:30.751666    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 13 times in a row; duplicates collapsed]
E1128 00:46:11.465935    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 3 times in a row; duplicates collapsed]
E1128 00:46:21.163242    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 6 times in a row; duplicates collapsed]
E1128 00:46:37.724925    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated twice in a row; duplicate collapsed]
E1128 00:46:45.184262    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 5 times in a row; duplicates collapsed]
E1128 00:47:01.413596    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 3 times in a row; duplicates collapsed]
E1128 00:47:10.027477    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:47:10.501356    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated twice in a row; duplicate collapsed]
E1128 00:47:15.820293    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 14 times in a row; duplicates collapsed]
E1128 00:48:00.392163    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:48:00.769757    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 5 times in a row; duplicates collapsed]
E1128 00:48:17.075541    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 5 times in a row; duplicates collapsed]
E1128 00:48:33.075690    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:48:34.398756    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:48:37.490707    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 6 times in a row; duplicates collapsed]
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": context deadline exceeded
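(Every poll above fails against the same apiserver endpoint. Assuming the kubeconfig context carries the profile name, as with the other profiles in this report, the failing request can be reproduced by hand; a sketch, not part of the original run:)

	kubectl --context default-k8s-diff-port-172365 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# or the raw apiserver query the test helper issues (-k because the cluster cert is self-signed):
	curl -k "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"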
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (3.964477007s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:49:02.262720  407160 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:58158->127.0.0.1:33088: read: connection reset by peer
	E1128 00:49:02.262736  407160 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:58158->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
start_stop_delete_test.go:274: status error: exit status 3 (may be ok)
start_stop_delete_test.go:274: "default-k8s-diff-port-172365" apiserver is not running, skipping kubectl commands (state="Nonexistent")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
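(The post-mortem below dumps the full docker inspect; the container state alone can be read with a format template. A minimal sketch, assuming the profile name doubles as the container name, which the inspect output below confirms:)

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' default-k8s-diff-port-172365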
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-172365
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-172365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261",
	        "Created": "2023-11-28T00:29:46.397673041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:31:29.890726479Z",
	            "FinishedAt": "2023-11-28T00:31:28.766003125Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hostname",
	        "HostsPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hosts",
	        "LogPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261-json.log",
	        "Name": "/default-k8s-diff-port-172365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-172365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-172365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172365",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96c12a95771c011e7355bb03f7d824b67f5102abdf4f387627f1451ac81c26e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e96c12a95771",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-172365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "449309480d62",
	                        "default-k8s-diff-port-172365"
	                    ],
	                    "NetworkID": "96b80b52da08f1aa76e6880f9d2729ab961d1f35bb5b02e1e966c7e67885c240",
	                    "EndpointID": "7168e7c6350adfd9bbb1012fbf483698dc0401739424611b31a52b1755bd43fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
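(The ssh handshake errors above and below target 127.0.0.1:33088, which the Ports map in the inspect output ties to the node's 22/tcp. The published mapping can be confirmed without parsing the JSON; a sketch, not part of the original run:)

	docker port default-k8s-diff-port-172365 22
	# expected, per the inspect output: 127.0.0.1:33088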
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (3.053300645s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:49:05.334733  407187 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40706->127.0.0.1:33088: read: connection reset by peer
	E1128 00:49:05.334768  407187 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:40706->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-172365" host is not running, skipping log retrieval (state="Error")
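(With the apiserver unreachable and SSH resetting, but the container still reported as running, two host-side checks that bypass SSH entirely; a sketch, not part of the original run:)

	nc -vz 127.0.0.1 33088                    # does the published SSH port accept TCP connections at all?
	docker logs default-k8s-diff-port-172365  # read the node's init output straight from Docker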
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (547.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (550.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 00:49:07.704481    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated twice in a row; duplicate collapsed]
E1128 00:49:13.484896    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:49:14.513690    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[warning above repeated 15 times in a row; duplicates collapsed]
E1128 00:49:59.152110    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 4 more times]
E1128 00:50:13.547665    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 2 more times]
E1128 00:50:22.140772    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 15 more times]
E1128 00:51:11.465868    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:51:20.125853    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:51:21.163277    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 4 more times]
E1128 00:51:37.725355    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 6 more times]
E1128 00:51:58.866093    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:52:01.413811    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 2 more times]
E1128 00:52:10.028170    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:52:10.501121    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:52:15.820740    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:52:16.534694    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 13 more times]
E1128 00:53:00.392673    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:53:02.194043    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 3 more times]
E1128 00:53:17.075383    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 5 more times]
E1128 00:53:34.398437    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:53:37.491022    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 9 more times]
E1128 00:54:07.704265    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:54:13.485009    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 2 more times]
E1128 00:54:24.207569    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 11 more times]
E1128 00:54:59.151781    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:55:04.460576    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 5 more times]
E1128 00:55:22.140355    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 13 more times]
E1128 00:56:03.442177    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:56:11.466258    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 2 more times]
E1128 00:56:21.163237    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 5 more times]
E1128 00:56:37.442277    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:56:37.725086    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:56:40.536615    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 5 more times]
E1128 00:57:01.413470    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 2 more times]
E1128 00:57:10.027564    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:57:10.501389    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
E1128 00:57:15.819810    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
[previous WARNING repeated 15 times in total]
E1128 00:58:00.392155    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8444: connect: no route to host
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (3.902783677s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:58:09.238705  408240 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56306->127.0.0.1:33088: read: connection reset by peer
	E1128 00:58:09.238724  408240 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56306->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
start_stop_delete_test.go:287: status error: exit status 3 (may be ok)
start_stop_delete_test.go:287: "default-k8s-diff-port-172365" apiserver is not running, skipping kubectl commands (state="Nonexistent")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-172365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-172365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (3.067945128s)

                                                
                                                
** stderr ** 
	Unable to connect to the server: dial tcp 192.168.76.2:8444: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-172365 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-172365
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-172365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261",
	        "Created": "2023-11-28T00:29:46.397673041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:31:29.890726479Z",
	            "FinishedAt": "2023-11-28T00:31:28.766003125Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hostname",
	        "HostsPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hosts",
	        "LogPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261-json.log",
	        "Name": "/default-k8s-diff-port-172365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-172365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-172365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172365",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96c12a95771c011e7355bb03f7d824b67f5102abdf4f387627f1451ac81c26e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e96c12a95771",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-172365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "449309480d62",
	                        "default-k8s-diff-port-172365"
	                    ],
	                    "NetworkID": "96b80b52da08f1aa76e6880f9d2729ab961d1f35bb5b02e1e966c7e67885c240",
	                    "EndpointID": "7168e7c6350adfd9bbb1012fbf483698dc0401739424611b31a52b1755bd43fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (3.056879376s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:58:15.382692  408281 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:57164->127.0.0.1:33088: read: connection reset by peer
	E1128 00:58:15.382713  408281 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:57164->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-172365" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (550.05s)
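Triage note on the failure above: the run of identical pod-list warnings comes from a helper that polls the kubernetes-dashboard namespace until a labelled pod reaches Running or the 9m0s deadline expires. The Go sketch below is a minimal stand-in for that loop, not the test's actual helper; the kubeconfig location and the 5-second retry interval are assumptions. With the apiserver unreachable on 192.168.76.2:8444, every List call fails exactly as logged until the context deadline fires.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); the path is an assumption.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same 9m0s budget the test reports.
		ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
		defer cancel()
		for {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// This is the line the report keeps printing as a WARNING.
				fmt.Println("WARNING: pod list returned:", err)
			} else {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("dashboard pod running:", p.Name)
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed to start within 9m0s:", ctx.Err())
				return
			case <-time.After(5 * time.Second): // assumed retry interval
			}
		}
	}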

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-172365 "sudo crictl images -o json"
E1128 00:58:17.075670    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p default-k8s-diff-port-172365 "sudo crictl images -o json": exit status 1 (3.072450942s)

                                                
                                                
** stderr ** 
	ssh: ssh: handshake failed: read tcp 127.0.0.1:57176->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-arm64 ssh -p default-k8s-diff-port-172365 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
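For context on the -want +got diff above: the check decodes the `sudo crictl images -o json` output and subtracts the tags it finds from the expected list. Below is a minimal sketch of that logic, assuming the CRI response is a JSON object with an `images` array whose entries carry `repoTags` (the field names are an assumption, not confirmed from the test source). Because the ssh command produced no output at all, the decode fails with "unexpected end of JSON input" and every wanted image lands on the -want side, exactly as reported.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList models only the field we need from `crictl images -o json`
	// (assumed shape: {"images":[{"repoTags":[...]}, ...]}).
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// missingImages returns every wanted tag that raw does not contain.
	func missingImages(raw []byte, want []string) ([]string, error) {
		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			// Empty input yields "unexpected end of JSON input".
			return want, err
		}
		got := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				got[tag] = true
			}
		}
		var missing []string
		for _, w := range want {
			if !got[w] {
				missing = append(missing, w)
			}
		}
		return missing, nil
	}

	func main() {
		want := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.9-0"}
		missing, err := missingImages(nil, want) // nil stands in for the empty ssh output
		fmt.Println(err)     // unexpected end of JSON input
		fmt.Println(missing) // both images reported missing
	}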
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-172365
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-172365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261",
	        "Created": "2023-11-28T00:29:46.397673041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:31:29.890726479Z",
	            "FinishedAt": "2023-11-28T00:31:28.766003125Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hostname",
	        "HostsPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hosts",
	        "LogPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261-json.log",
	        "Name": "/default-k8s-diff-port-172365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-172365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-172365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172365",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96c12a95771c011e7355bb03f7d824b67f5102abdf4f387627f1451ac81c26e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e96c12a95771",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-172365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "449309480d62",
	                        "default-k8s-diff-port-172365"
	                    ],
	                    "NetworkID": "96b80b52da08f1aa76e6880f9d2729ab961d1f35bb5b02e1e966c7e67885c240",
	                    "EndpointID": "7168e7c6350adfd9bbb1012fbf483698dc0401739424611b31a52b1755bd43fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (3.049038365s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:58:21.526720  408373 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:57190->127.0.0.1:33088: read: connection reset by peer
	E1128 00:58:21.526743  408373 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:57190->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-172365" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (6.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (15.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-172365 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-172365 --alsologtostderr -v=1: exit status 80 (9.358794599s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-172365 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 00:58:21.589925  408395 out.go:296] Setting OutFile to fd 1 ...
	I1128 00:58:21.590143  408395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:58:21.590169  408395 out.go:309] Setting ErrFile to fd 2...
	I1128 00:58:21.590189  408395 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 00:58:21.590509  408395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1128 00:58:21.590827  408395 out.go:303] Setting JSON to false
	I1128 00:58:21.590932  408395 mustload.go:65] Loading cluster: default-k8s-diff-port-172365
	I1128 00:58:21.591364  408395 config.go:182] Loaded profile config "default-k8s-diff-port-172365": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1128 00:58:21.591918  408395 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-172365 --format={{.State.Status}}
	I1128 00:58:21.611606  408395 host.go:66] Checking if "default-k8s-diff-port-172365" exists ...
	I1128 00:58:21.611920  408395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1128 00:58:21.681758  408395 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-28 00:58:21.672436743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1128 00:58:21.682478  408395 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/17206/minikube-v1.32.1-1701107474-17206-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.32.1-1701107474-17206/minikube-v1.32.1-1701107474-17206-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.32.1-1701107474-17206-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-172365 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1128 00:58:21.685116  408395 out.go:177] * Pausing node default-k8s-diff-port-172365 ... 
	I1128 00:58:21.687924  408395 host.go:66] Checking if "default-k8s-diff-port-172365" exists ...
	I1128 00:58:21.688257  408395 ssh_runner.go:195] Run: systemctl --version
	I1128 00:58:21.688300  408395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:58:21.704960  408395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:58:24.598615  408395 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:57198->127.0.0.1:33088: read: connection reset by peer
	I1128 00:58:24.598800  408395 ssh_runner.go:195] Run: sudo service kubelet status
	I1128 00:58:24.598847  408395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:58:24.616308  408395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:58:27.670613  408395 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:36558->127.0.0.1:33088: read: connection reset by peer
	I1128 00:58:27.670692  408395 pause.go:51] kubelet running: false
	I1128 00:58:27.670765  408395 ssh_runner.go:195] Run: sudo service kubelet stop
	I1128 00:58:27.670818  408395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-172365
	I1128 00:58:27.687721  408395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/default-k8s-diff-port-172365/id_rsa Username:docker}
	W1128 00:58:30.742625  408395 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:36562->127.0.0.1:33088: read: connection reset by peer
	I1128 00:58:30.744861  408395 out.go:177] 
	W1128 00:58:30.746474  408395 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36562->127.0.0.1:33088: read: connection reset by peer
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36562->127.0.0.1:33088: read: connection reset by peer
	W1128 00:58:30.746494  408395 out.go:239] * 
	* 
	W1128 00:58:30.880316  408395 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 00:58:30.882851  408395 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-arm64 pause -p default-k8s-diff-port-172365 --alsologtostderr -v=1 failed: exit status 80
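Triage note: every step of the pause failed identically: the TCP connect to the forwarded port 33088 succeeds, but the peer resets the connection before the SSH handshake completes, so key authentication is never reached. A quick hypothetical probe for this situation (not part of minikube) is to dial the port and try to read the SSH banner; the port number below is taken from the docker inspect output that follows.
	package main

	import (
		"bufio"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 127.0.0.1:33088 is the host side of the container's 22/tcp mapping.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:33088", 3*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		conn.SetReadDeadline(time.Now().Add(3 * time.Second))
		banner, err := bufio.NewReader(conn).ReadString('\n')
		if err != nil {
			// A reset or timeout here matches the handshake failures above.
			fmt.Println("no SSH banner:", err)
			return
		}
		fmt.Print("banner: ", banner) // e.g. "SSH-2.0-OpenSSH_..."
	}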
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-172365
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-172365:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261",
	        "Created": "2023-11-28T00:29:46.397673041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378765,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-28T00:31:29.890726479Z",
	            "FinishedAt": "2023-11-28T00:31:28.766003125Z"
	        },
	        "Image": "sha256:6b2707af759835616662ab4511fa7cfc968ed5500b8f30a5d231d9af64582310",
	        "ResolvConfPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hostname",
	        "HostsPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/hosts",
	        "LogPath": "/var/lib/docker/containers/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261/449309480d62fcd03e28211fecf49d9f375e725204a762cd248e61957532d261-json.log",
	        "Name": "/default-k8s-diff-port-172365",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-172365:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-172365",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388-init/diff:/var/lib/docker/overlay2/150860364e38eff4c019c0f0be35917511a7a9583a74bcbfa76fc06522b5265f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/merged",
	                "UpperDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/diff",
	                "WorkDir": "/var/lib/docker/overlay2/005addcad21b404834e4864e6901f6fe227cecfc3a9b1e347a8353375e4f2388/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-172365",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-172365/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-172365",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-172365",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e96c12a95771c011e7355bb03f7d824b67f5102abdf4f387627f1451ac81c26e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e96c12a95771",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-172365": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "449309480d62",
	                        "default-k8s-diff-port-172365"
	                    ],
	                    "NetworkID": "96b80b52da08f1aa76e6880f9d2729ab961d1f35bb5b02e1e966c7e67885c240",
	                    "EndpointID": "7168e7c6350adfd9bbb1012fbf483698dc0401739424611b31a52b1755bd43fc",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
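The inspect output above carries the one detail that matters for the failures that follow: every container port is published on 127.0.0.1, with 22/tcp mapped to host port 33088. As an editorial aside (not part of the test harness), a single mapping can be pulled straight out of that JSON with docker inspect's Go-template flag instead of scanning the full dump; the sketch below assumes only the profile name shown in this report:

	# Print the host port that container port 22/tcp is published on.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' default-k8s-diff-port-172365
	# -> 33088, matching the "Ports" section above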
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (2.910248139s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:58:33.814743  408474 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36578->127.0.0.1:33088: read: connection reset by peer
	E1128 00:58:33.814764  408474 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36578->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-172365" host is not running, skipping log retrieval (state="Error")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-172365
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-172365:

                                                
                                                
-- stdout --
	[output identical to the preceding docker inspect listing for default-k8s-diff-port-172365; duplicate omitted]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
E1128 00:58:34.398066    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 3 (3.053900331s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 00:58:36.886690  408510 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:49060->127.0.0.1:33088: read: connection reset by peer
	E1128 00:58:36.886712  408510 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:49060->127.0.0.1:33088: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-172365" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (15.36s)
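The Pause failure therefore bottoms out one layer below Kubernetes: both status probes die in the SSH handshake to 127.0.0.1:33088 (the published 22/tcp port from the inspect output) with "connection reset by peer", so it is the node's sshd, not the apiserver, that stopped answering. A minimal reproduction sketch, assuming a still-running profile and minikube's usual key location under the run's MINIKUBE_HOME (the ssh user "docker" is the kicbase default; both are assumptions, not taken from this log):

	# Take the same SSH path the status command uses; a reset here
	# confirms sshd in the kic container is the failing layer.
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i "$MINIKUBE_HOME/machines/default-k8s-diff-port-172365/id_rsa" \
	    -p 33088 docker@127.0.0.1 true

	# Or let minikube resolve the key and port itself, in the same
	# form other tests in this report use:
	out/minikube-linux-arm64 -p default-k8s-diff-port-172365 ssh "true"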

                                                
                                    

Test pass (295/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.64
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.4/json-events 12.33
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.0/json-events 14.98
18 TestDownloadOnly/v1.29.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.0/LogsDuration 0.36
23 TestDownloadOnly/DeleteAll 0.32
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestBinaryMirror 0.6
27 TestOffline 64.95
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
32 TestAddons/Setup 142.46
34 TestAddons/parallel/Registry 14.54
36 TestAddons/parallel/InspektorGadget 10.74
37 TestAddons/parallel/MetricsServer 5.86
40 TestAddons/parallel/CSI 49.51
41 TestAddons/parallel/Headlamp 11.09
42 TestAddons/parallel/CloudSpanner 5.5
43 TestAddons/parallel/LocalPath 51.82
44 TestAddons/parallel/NvidiaDevicePlugin 5.48
47 TestAddons/serial/GCPAuth/Namespaces 0.18
48 TestAddons/StoppedEnableDisable 11.22
49 TestCertOptions 43.3
50 TestCertExpiration 245.72
51 TestDockerFlags 46.1
52 TestForceSystemdFlag 43.32
53 TestForceSystemdEnv 45.38
59 TestErrorSpam/setup 33.05
60 TestErrorSpam/start 0.84
61 TestErrorSpam/status 1.07
62 TestErrorSpam/pause 1.38
63 TestErrorSpam/unpause 1.51
64 TestErrorSpam/stop 2.12
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 88.13
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 37.06
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.1
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
76 TestFunctional/serial/CacheCmd/cache/add_local 0.94
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.17
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
84 TestFunctional/serial/ExtraConfig 43.11
85 TestFunctional/serial/ComponentHealth 0.1
86 TestFunctional/serial/LogsCmd 1.25
87 TestFunctional/serial/LogsFileCmd 1.29
88 TestFunctional/serial/InvalidService 4.64
90 TestFunctional/parallel/ConfigCmd 0.55
91 TestFunctional/parallel/DashboardCmd 11.66
92 TestFunctional/parallel/DryRun 0.65
93 TestFunctional/parallel/InternationalLanguage 0.33
94 TestFunctional/parallel/StatusCmd 1.13
98 TestFunctional/parallel/ServiceCmdConnect 7.66
99 TestFunctional/parallel/AddonsCmd 0.25
100 TestFunctional/parallel/PersistentVolumeClaim 28.55
102 TestFunctional/parallel/SSHCmd 0.85
103 TestFunctional/parallel/CpCmd 1.54
105 TestFunctional/parallel/FileSync 0.37
106 TestFunctional/parallel/CertSync 2.34
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
114 TestFunctional/parallel/License 0.46
115 TestFunctional/parallel/Version/short 0.11
116 TestFunctional/parallel/Version/components 1.22
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.49
122 TestFunctional/parallel/ImageCommands/Setup 1.88
123 TestFunctional/parallel/DockerEnv/bash 1.33
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.03
128 TestFunctional/parallel/ServiceCmd/DeployApp 12.3
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.91
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.25
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.85
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.21
134 TestFunctional/parallel/ServiceCmd/List 0.46
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.2
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.41
137 TestFunctional/parallel/ServiceCmd/HTTPS 0.63
138 TestFunctional/parallel/ServiceCmd/Format 0.56
139 TestFunctional/parallel/ServiceCmd/URL 0.71
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.42
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
152 TestFunctional/parallel/ProfileCmd/profile_list 0.44
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
154 TestFunctional/parallel/MountCmd/any-port 7.16
155 TestFunctional/parallel/MountCmd/specific-port 2.6
156 TestFunctional/parallel/MountCmd/VerifyCleanup 3.03
157 TestFunctional/delete_addon-resizer_images 0.08
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
163 TestImageBuild/serial/Setup 34.3
164 TestImageBuild/serial/NormalBuild 1.91
165 TestImageBuild/serial/BuildWithBuildArg 0.92
166 TestImageBuild/serial/BuildWithDockerIgnore 0.73
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.72
170 TestIngressAddonLegacy/StartLegacyK8sCluster 71.14
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.05
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
177 TestJSONOutput/start/Command 45.65
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.61
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.54
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 7.87
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.24
202 TestKicCustomNetwork/create_custom_network 32.88
203 TestKicCustomNetwork/use_default_bridge_network 32.28
204 TestKicExistingNetwork 32.77
205 TestKicCustomSubnet 36.35
206 TestKicStaticIP 33.22
207 TestMainNoArgs 0.06
208 TestMinikubeProfile 71.45
211 TestMountStart/serial/StartWithMountFirst 7.43
212 TestMountStart/serial/VerifyMountFirst 0.29
213 TestMountStart/serial/StartWithMountSecond 10.43
214 TestMountStart/serial/VerifyMountSecond 0.28
215 TestMountStart/serial/DeleteFirst 1.49
216 TestMountStart/serial/VerifyMountPostDelete 0.28
217 TestMountStart/serial/Stop 1.22
218 TestMountStart/serial/RestartStopped 8.16
219 TestMountStart/serial/VerifyMountPostStop 0.28
222 TestMultiNode/serial/FreshStart2Nodes 81.1
223 TestMultiNode/serial/DeployApp2Nodes 47.91
224 TestMultiNode/serial/PingHostFrom2Pods 1.15
225 TestMultiNode/serial/AddNode 20.5
226 TestMultiNode/serial/ProfileList 0.37
227 TestMultiNode/serial/CopyFile 10.87
228 TestMultiNode/serial/StopNode 2.33
229 TestMultiNode/serial/StartAfterStop 13.45
230 TestMultiNode/serial/RestartKeepsNodes 120.89
231 TestMultiNode/serial/DeleteNode 5.2
232 TestMultiNode/serial/StopMultiNode 21.7
233 TestMultiNode/serial/RestartMultiNode 86
234 TestMultiNode/serial/ValidateNameConflict 37.67
239 TestPreload 172.74
241 TestScheduledStopUnix 106.02
242 TestSkaffold 103.31
244 TestInsufficientStorage 10.63
245 TestRunningBinaryUpgrade 111.63
247 TestKubernetesUpgrade 414.62
248 TestMissingContainerUpgrade 195.88
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.13
251 TestNoKubernetes/serial/StartWithK8s 42.67
252 TestNoKubernetes/serial/StartWithStopK8s 8.2
253 TestNoKubernetes/serial/Start 8.28
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.45
255 TestNoKubernetes/serial/ProfileList 1.57
256 TestNoKubernetes/serial/Stop 1.32
257 TestNoKubernetes/serial/StartNoArgs 8.66
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.41
270 TestStoppedBinaryUpgrade/Setup 1.69
271 TestStoppedBinaryUpgrade/Upgrade 149.61
272 TestStoppedBinaryUpgrade/MinikubeLogs 2.13
281 TestPause/serial/Start 57.73
282 TestNetworkPlugins/group/auto/Start 78.67
283 TestPause/serial/SecondStartNoReconfiguration 38.03
284 TestPause/serial/Pause 0.67
285 TestPause/serial/VerifyStatus 0.36
286 TestPause/serial/Unpause 0.58
287 TestPause/serial/PauseAgain 0.78
288 TestPause/serial/DeletePaused 2.36
289 TestNetworkPlugins/group/auto/KubeletFlags 0.31
290 TestNetworkPlugins/group/auto/NetCatPod 13.42
291 TestPause/serial/VerifyDeletedResources 0.43
292 TestNetworkPlugins/group/kindnet/Start 57.52
293 TestNetworkPlugins/group/auto/DNS 0.29
294 TestNetworkPlugins/group/auto/Localhost 0.2
295 TestNetworkPlugins/group/auto/HairPin 0.27
296 TestNetworkPlugins/group/calico/Start 85.03
297 TestNetworkPlugins/group/kindnet/ControllerPod 7.04
298 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
299 TestNetworkPlugins/group/kindnet/NetCatPod 15.52
300 TestNetworkPlugins/group/kindnet/DNS 0.22
301 TestNetworkPlugins/group/kindnet/Localhost 0.21
302 TestNetworkPlugins/group/kindnet/HairPin 0.18
303 TestNetworkPlugins/group/custom-flannel/Start 69.06
304 TestNetworkPlugins/group/calico/ControllerPod 5.04
305 TestNetworkPlugins/group/calico/KubeletFlags 0.43
306 TestNetworkPlugins/group/calico/NetCatPod 11.62
307 TestNetworkPlugins/group/calico/DNS 0.43
308 TestNetworkPlugins/group/calico/Localhost 0.33
309 TestNetworkPlugins/group/calico/HairPin 0.32
310 TestNetworkPlugins/group/false/Start 56.51
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.4
313 TestNetworkPlugins/group/custom-flannel/DNS 0.25
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
316 TestNetworkPlugins/group/enable-default-cni/Start 89.69
317 TestNetworkPlugins/group/false/KubeletFlags 0.31
318 TestNetworkPlugins/group/false/NetCatPod 10.35
319 TestNetworkPlugins/group/false/DNS 26.53
320 TestNetworkPlugins/group/false/Localhost 0.19
321 TestNetworkPlugins/group/false/HairPin 0.26
322 TestNetworkPlugins/group/flannel/Start 63.56
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.61
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
328 TestNetworkPlugins/group/bridge/Start 93.5
329 TestNetworkPlugins/group/flannel/ControllerPod 5.04
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.51
331 TestNetworkPlugins/group/flannel/NetCatPod 11.38
332 TestNetworkPlugins/group/flannel/DNS 0.28
333 TestNetworkPlugins/group/flannel/Localhost 0.2
334 TestNetworkPlugins/group/flannel/HairPin 0.23
335 TestNetworkPlugins/group/kubenet/Start 50.36
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.49
337 TestNetworkPlugins/group/bridge/NetCatPod 13.49
338 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
339 TestNetworkPlugins/group/kubenet/NetCatPod 9.47
340 TestNetworkPlugins/group/kubenet/DNS 0.22
341 TestNetworkPlugins/group/kubenet/Localhost 0.17
342 TestNetworkPlugins/group/kubenet/HairPin 0.18
343 TestNetworkPlugins/group/bridge/DNS 0.26
344 TestNetworkPlugins/group/bridge/Localhost 0.2
345 TestNetworkPlugins/group/bridge/HairPin 0.19
347 TestStartStop/group/old-k8s-version/serial/FirstStart 144.18
349 TestStartStop/group/no-preload/serial/FirstStart 66.77
350 TestStartStop/group/no-preload/serial/DeployApp 10.03
351 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.08
352 TestStartStop/group/no-preload/serial/Stop 10.97
353 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
354 TestStartStop/group/no-preload/serial/SecondStart 340.59
355 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.95
357 TestStartStop/group/old-k8s-version/serial/Stop 11
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
359 TestStartStop/group/old-k8s-version/serial/SecondStart 445.4
360 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
361 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
362 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.42
363 TestStartStop/group/no-preload/serial/Pause 3.07
365 TestStartStop/group/embed-certs/serial/FirstStart 46.75
366 TestStartStop/group/embed-certs/serial/DeployApp 10.7
367 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.41
368 TestStartStop/group/embed-certs/serial/Stop 10.99
369 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
370 TestStartStop/group/embed-certs/serial/SecondStart 344.11
371 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
372 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
374 TestStartStop/group/old-k8s-version/serial/Pause 3.11
376 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 86.42
377 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.47
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.04
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
385 TestStartStop/group/embed-certs/serial/Pause 3.09
387 TestStartStop/group/newest-cni/serial/FirstStart 48.95
388 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
390 TestStartStop/group/newest-cni/serial/Stop 5.76
391 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
392 TestStartStop/group/newest-cni/serial/SecondStart 31.86
393 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
394 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
396 TestStartStop/group/newest-cni/serial/Pause 2.98
TestDownloadOnly/v1.16.0/json-events (17.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (17.641750874s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.64s)
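With -o=json, minikube emits its progress as a stream of JSON CloudEvents rather than human-readable text, which is what this test consumes. As a hedged aside, the stream can be inspected by hand roughly as below; the event type string is minikube's documented step-event type, and jq is assumed to be available on the host:

	# Print just the step messages from minikube's JSON event stream.
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force \
	    --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'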

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-602899
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-602899: exit status 85 (86.22318ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-602899 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-602899        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:24:50
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:24:50.447618    7465 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:24:50.447836    7465 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:50.447862    7465 out.go:309] Setting ErrFile to fd 2...
	I1127 23:24:50.447882    7465 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:24:50.448146    7465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	W1127 23:24:50.448329    7465 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-2172/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-2172/.minikube/config/config.json: no such file or directory
	I1127 23:24:50.448799    7465 out.go:303] Setting JSON to true
	I1127 23:24:50.449579    7465 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":440,"bootTime":1701127051,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:24:50.449668    7465 start.go:138] virtualization:  
	I1127 23:24:50.453219    7465 out.go:97] [download-only-602899] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:24:50.455812    7465 out.go:169] MINIKUBE_LOCATION=17206
	W1127 23:24:50.453457    7465 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 23:24:50.453511    7465 notify.go:220] Checking for updates...
	I1127 23:24:50.459927    7465 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:24:50.462129    7465 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:24:50.464241    7465 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:24:50.466489    7465 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1127 23:24:50.471355    7465 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:24:50.471579    7465 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:24:50.494415    7465 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:24:50.494510    7465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:50.869174    7465 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-27 23:24:50.859316016 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:24:50.869282    7465 docker.go:295] overlay module found
	I1127 23:24:50.871953    7465 out.go:97] Using the docker driver based on user configuration
	I1127 23:24:50.871975    7465 start.go:298] selected driver: docker
	I1127 23:24:50.871981    7465 start.go:902] validating driver "docker" against <nil>
	I1127 23:24:50.872082    7465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:24:50.943866    7465 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-27 23:24:50.93519565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:24:50.944019    7465 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 23:24:50.944273    7465 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1127 23:24:50.944438    7465 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 23:24:50.947184    7465 out.go:169] Using Docker driver with root privileges
	I1127 23:24:50.949339    7465 cni.go:84] Creating CNI manager for ""
	I1127 23:24:50.949359    7465 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1127 23:24:50.949375    7465 start_flags.go:323] config:
	{Name:download-only-602899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-602899 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:24:50.951414    7465 out.go:97] Starting control plane node download-only-602899 in cluster download-only-602899
	I1127 23:24:50.951429    7465 cache.go:121] Beginning downloading kic base image for docker with docker
	I1127 23:24:50.953323    7465 out.go:97] Pulling base image ...
	I1127 23:24:50.953340    7465 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1127 23:24:50.953431    7465 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:24:50.969561    7465 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:24:50.969727    7465 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:24:50.969824    7465 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:24:51.022880    7465 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1127 23:24:51.022905    7465 cache.go:56] Caching tarball of preloaded images
	I1127 23:24:51.023032    7465 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1127 23:24:51.025661    7465 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 23:24:51.025677    7465 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:24:51.134787    7465 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1127 23:24:56.949912    7465 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:25:03.675745    7465 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:25:03.675859    7465 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:25:04.585880    7465 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1127 23:25:04.586258    7465 profile.go:148] Saving config to /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/download-only-602899/config.json ...
	I1127 23:25:04.586287    7465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/download-only-602899/config.json: {Name:mk40925645b937c762375149021bb663926f4ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 23:25:04.586476    7465 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1127 23:25:04.586662    7465 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17206-2172/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-602899"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
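The preload download logged above verifies itself in-band: the ?checksum=md5:... query on the storage URL appears to be consumed by minikube's downloader, which checks the tarball against that digest before caching it (the server ignores the parameter). A hedged sketch of redoing that check by hand, reusing the exact path and digest printed in the log; md5sum's check mode expects "digest<two spaces>path":

	# Re-verify the cached v1.16.0 preload against the digest from the download URL.
	echo "a000baffb0664b293d602f95ed25caa6  /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4" \
	  | md5sum -c -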

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (12.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.328163547s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (12.33s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-602899
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-602899: exit status 85 (82.314623ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-602899 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-602899        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-602899 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |          |
	|         | -p download-only-602899        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:25:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:25:08.180914    7541 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:25:08.181128    7541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:08.181166    7541 out.go:309] Setting ErrFile to fd 2...
	I1127 23:25:08.181187    7541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:08.181450    7541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	W1127 23:25:08.181615    7541 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-2172/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-2172/.minikube/config/config.json: no such file or directory
	I1127 23:25:08.181865    7541 out.go:303] Setting JSON to true
	I1127 23:25:08.182606    7541 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":458,"bootTime":1701127051,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:25:08.182689    7541 start.go:138] virtualization:  
	I1127 23:25:08.185102    7541 out.go:97] [download-only-602899] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:25:08.187304    7541 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:25:08.185397    7541 notify.go:220] Checking for updates...
	I1127 23:25:08.189228    7541 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:25:08.191596    7541 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:25:08.193744    7541 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:25:08.195446    7541 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1127 23:25:08.199206    7541 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:25:08.199707    7541 config.go:182] Loaded profile config "download-only-602899": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1127 23:25:08.199767    7541 start.go:810] api.Load failed for download-only-602899: filestore "download-only-602899": Docker machine "download-only-602899" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:25:08.199863    7541 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:25:08.199890    7541 start.go:810] api.Load failed for download-only-602899: filestore "download-only-602899": Docker machine "download-only-602899" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:25:08.224059    7541 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:25:08.224165    7541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:08.316904    7541 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 23:25:08.308072427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:25:08.317018    7541 docker.go:295] overlay module found
	I1127 23:25:08.319068    7541 out.go:97] Using the docker driver based on existing profile
	I1127 23:25:08.319089    7541 start.go:298] selected driver: docker
	I1127 23:25:08.319095    7541 start.go:902] validating driver "docker" against &{Name:download-only-602899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-602899 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:08.319248    7541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:08.385820    7541 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 23:25:08.377182558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:25:08.386245    7541 cni.go:84] Creating CNI manager for ""
	I1127 23:25:08.386267    7541 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 23:25:08.386285    7541 start_flags.go:323] config:
	{Name:download-only-602899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-602899 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:08.388599    7541 out.go:97] Starting control plane node download-only-602899 in cluster download-only-602899
	I1127 23:25:08.388622    7541 cache.go:121] Beginning downloading kic base image for docker with docker
	I1127 23:25:08.390610    7541 out.go:97] Pulling base image ...
	I1127 23:25:08.390629    7541 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 23:25:08.390779    7541 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:25:08.406880    7541 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:25:08.407002    7541 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:25:08.407021    7541 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:25:08.407029    7541 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:25:08.407037    7541 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:25:08.458020    7541 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1127 23:25:08.458046    7541 cache.go:56] Caching tarball of preloaded images
	I1127 23:25:08.458186    7541 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 23:25:08.460145    7541 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1127 23:25:08.460164    7541 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:25:08.574320    7541 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-602899"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
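
The exit status 85 above is expected for a download-only profile: no control plane node was ever created (hence the empty node name in the quoted output), so "minikube logs" has nothing to read, and the LogsDuration subtest apparently only measures how long the command takes. Note also that the preload fetch earlier in this log carries a "?checksum=md5:..." query parameter that minikube validates after download. As a rough, hypothetical Go sketch (not minikube's own download code), that validation amounts to recomputing the digest and comparing it with the value from the URL; only the filename and digest below come from this log:

	// Hypothetical verification sketch: recompute the md5 of the downloaded
	// preload tarball and compare it with the digest embedded in the URL above.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		const want = "6fb922d1d9dc01a9d3c0b965ed219613" // from ?checksum=md5:... above
		f, err := os.Open("preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		fmt.Println(hex.EncodeToString(h.Sum(nil)) == want) // true for an intact download
	}
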
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.0/json-events (14.98s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-602899 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.984150631s)
--- PASS: TestDownloadOnly/v1.29.0-rc.0/json-events (14.98s)

TestDownloadOnly/v1.29.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.36s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-602899
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-602899: exit status 85 (362.041667ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-602899 | jenkins | v1.32.0 | 27 Nov 23 23:24 UTC |          |
	|         | -p download-only-602899           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-602899 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |          |
	|         | -p download-only-602899           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-602899 | jenkins | v1.32.0 | 27 Nov 23 23:25 UTC |          |
	|         | -p download-only-602899           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 23:25:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.4 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 23:25:20.593704    7615 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:25:20.593857    7615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:20.593865    7615 out.go:309] Setting ErrFile to fd 2...
	I1127 23:25:20.593871    7615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:25:20.594127    7615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	W1127 23:25:20.594236    7615 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17206-2172/.minikube/config/config.json: open /home/jenkins/minikube-integration/17206-2172/.minikube/config/config.json: no such file or directory
	I1127 23:25:20.594535    7615 out.go:303] Setting JSON to true
	I1127 23:25:20.595237    7615 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":470,"bootTime":1701127051,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:25:20.595300    7615 start.go:138] virtualization:  
	I1127 23:25:20.597605    7615 out.go:97] [download-only-602899] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:25:20.599951    7615 out.go:169] MINIKUBE_LOCATION=17206
	I1127 23:25:20.597869    7615 notify.go:220] Checking for updates...
	I1127 23:25:20.603488    7615 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:25:20.605340    7615 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:25:20.607025    7615 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:25:20.608802    7615 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1127 23:25:20.612648    7615 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 23:25:20.613146    7615 config.go:182] Loaded profile config "download-only-602899": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1127 23:25:20.613201    7615 start.go:810] api.Load failed for download-only-602899: filestore "download-only-602899": Docker machine "download-only-602899" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:25:20.613292    7615 driver.go:378] Setting default libvirt URI to qemu:///system
	W1127 23:25:20.613318    7615 start.go:810] api.Load failed for download-only-602899: filestore "download-only-602899": Docker machine "download-only-602899" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1127 23:25:20.636316    7615 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:25:20.636409    7615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:20.720079    7615 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 23:25:20.710748726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:25:20.720171    7615 docker.go:295] overlay module found
	I1127 23:25:20.722158    7615 out.go:97] Using the docker driver based on existing profile
	I1127 23:25:20.722176    7615 start.go:298] selected driver: docker
	I1127 23:25:20.722181    7615 start.go:902] validating driver "docker" against &{Name:download-only-602899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-602899 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:20.722424    7615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:25:20.794664    7615 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-27 23:25:20.785958494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:25:20.795118    7615 cni.go:84] Creating CNI manager for ""
	I1127 23:25:20.795141    7615 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 23:25:20.795153    7615 start_flags.go:323] config:
	{Name:download-only-602899 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:download-only-602899 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:25:20.797257    7615 out.go:97] Starting control plane node download-only-602899 in cluster download-only-602899
	I1127 23:25:20.797274    7615 cache.go:121] Beginning downloading kic base image for docker with docker
	I1127 23:25:20.799184    7615 out.go:97] Pulling base image ...
	I1127 23:25:20.799211    7615 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime docker
	I1127 23:25:20.799253    7615 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local docker daemon
	I1127 23:25:20.815308    7615 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 to local cache
	I1127 23:25:20.815449    7615 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory
	I1127 23:25:20.815468    7615 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 in local cache directory, skipping pull
	I1127 23:25:20.815473    7615 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 exists in cache, skipping pull
	I1127 23:25:20.815483    7615 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 as a tarball
	I1127 23:25:20.873705    7615 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-docker-overlay2-arm64.tar.lz4
	I1127 23:25:20.873725    7615 cache.go:56] Caching tarball of preloaded images
	I1127 23:25:20.873870    7615 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime docker
	I1127 23:25:20.875897    7615 out.go:97] Downloading Kubernetes v1.29.0-rc.0 preload ...
	I1127 23:25:20.875914    7615 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I1127 23:25:20.988643    7615 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:89df40a674ee4016cba86d9b9e28bbb3 -> /home/jenkins/minikube-integration/17206-2172/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-602899"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.36s)

TestDownloadOnly/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.32s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-602899
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-341512 --alsologtostderr --binary-mirror http://127.0.0.1:46137 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-341512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-341512
--- PASS: TestBinaryMirror (0.60s)

TestOffline (64.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-807200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-807200 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m2.545819925s)
helpers_test.go:175: Cleaning up "offline-docker-807200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-807200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-807200: (2.405958472s)
--- PASS: TestOffline (64.95s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-889952
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-889952: exit status 85 (93.293265ms)

-- stdout --
	* Profile "addons-889952" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-889952"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-889952
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-889952: exit status 85 (96.043848ms)

-- stdout --
	* Profile "addons-889952" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-889952"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (142.46s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-889952 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-889952 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m22.461729251s)
--- PASS: TestAddons/Setup (142.46s)

TestAddons/parallel/Registry (14.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 32.62031ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5mgs4" [faf11a16-62ba-4bff-9bb0-cbe6f73bcf87] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013525408s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-d7k5l" [314907ab-51ee-48cd-ab69-01ae6063dd8d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016062744s
addons_test.go:339: (dbg) Run:  kubectl --context addons-889952 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-889952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-889952 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.447515214s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 ip
2023/11/27 23:28:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable registry --alsologtostderr -v=1
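
The registry-test pod above verifies in-cluster reachability with busybox's "wget --spider", which is a headers-only HTTP request against the registry Service's cluster DNS name. A minimal Go sketch of the same probe (hypothetical, and resolvable only from inside the cluster):

	// Issue an HTTP HEAD request to the registry Service, mirroring
	// `wget --spider -S http://registry.kube-system.svc.cluster.local`.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			panic(err) // the name only resolves from a pod inside the cluster
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // a 200 indicates the registry addon is serving
	}
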
--- PASS: TestAddons/parallel/Registry (14.54s)

TestAddons/parallel/InspektorGadget (10.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jkpq7" [b17b21eb-d72b-43c3-b178-26a584582658] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01041244s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-889952
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-889952: (5.731285344s)
--- PASS: TestAddons/parallel/InspektorGadget (10.74s)

TestAddons/parallel/MetricsServer (5.86s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.38329ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-5lpw9" [21ff43a6-cc55-40ff-90c0-8b20cc3d2980] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013217341s
addons_test.go:414: (dbg) Run:  kubectl --context addons-889952 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable metrics-server --alsologtostderr -v=1
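
The "kubectl top pods" step above succeeds only once metrics-server is registered as the aggregated metrics.k8s.io API. The same readings can be pulled raw, which can help when debugging this test; a hypothetical probe, shelling out to kubectl the way the test helpers do (profile name taken from this log):

	// Query the Metrics API directly; this is the data behind `kubectl top`.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-889952",
			"get", "--raw", "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out)) // JSON PodMetricsList
	}
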
--- PASS: TestAddons/parallel/MetricsServer (5.86s)

TestAddons/parallel/CSI (49.51s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 32.429991ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-889952 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-889952 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0fe87d94-f40b-48a6-950b-7d5220ea0b05] Pending
helpers_test.go:344: "task-pv-pod" [0fe87d94-f40b-48a6-950b-7d5220ea0b05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0fe87d94-f40b-48a6-950b-7d5220ea0b05] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.01496309s
addons_test.go:583: (dbg) Run:  kubectl --context addons-889952 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-889952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-889952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-889952 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-889952 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-889952 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-889952 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-889952 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f0371fc8-3d0e-4eee-aa6c-1628b0ca47de] Pending
helpers_test.go:344: "task-pv-pod-restore" [f0371fc8-3d0e-4eee-aa6c-1628b0ca47de] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f0371fc8-3d0e-4eee-aa6c-1628b0ca47de] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.019759505s
addons_test.go:625: (dbg) Run:  kubectl --context addons-889952 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-889952 delete pod task-pv-pod-restore: (1.164602767s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-889952 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-889952 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-889952 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.210741551s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable volumesnapshots --alsologtostderr -v=1
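
The long runs of helpers_test.go:394 above are a poll loop: the helper re-executes "kubectl get pvc ... -o jsonpath={.status.phase}" until the claim reports Bound (hpvc-restore needed many more polls, presumably because the restored volume waits on its source snapshot). A minimal sketch of that pattern, as an assumed stand-in rather than the test's actual helper:

	// Poll a PVC's phase via kubectl until it is Bound or the timeout expires.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitPVCBound(kubectx, name, ns string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectx,
				"get", "pvc", name, "-n", ns,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && string(out) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", ns, name, timeout)
	}

	func main() {
		fmt.Println(waitPVCBound("addons-889952", "hpvc-restore", "default", 6*time.Minute))
	}
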
--- PASS: TestAddons/parallel/CSI (49.51s)

TestAddons/parallel/Headlamp (11.09s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-889952 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-889952 --alsologtostderr -v=1: (1.06918929s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-4m5wv" [80ec9543-db5c-4235-819c-b1d6c9a1bcd9] Pending
helpers_test.go:344: "headlamp-777fd4b855-4m5wv" [80ec9543-db5c-4235-819c-b1d6c9a1bcd9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-4m5wv" [80ec9543-db5c-4235-819c-b1d6c9a1bcd9] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.023431101s
--- PASS: TestAddons/parallel/Headlamp (11.09s)

TestAddons/parallel/CloudSpanner (5.5s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-tf22h" [b415069a-e3f0-425c-baf8-a63f487673fa] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.008457771s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-889952
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/parallel/LocalPath (51.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-889952 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-889952 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [df9fd120-3c2d-4217-be0b-398e547e2424] Pending
helpers_test.go:344: "test-local-path" [df9fd120-3c2d-4217-be0b-398e547e2424] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [df9fd120-3c2d-4217-be0b-398e547e2424] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [df9fd120-3c2d-4217-be0b-398e547e2424] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.0097768s
addons_test.go:890: (dbg) Run:  kubectl --context addons-889952 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 ssh "cat /opt/local-path-provisioner/pvc-330b5ec0-97af-4e44-ab94-121d2102abbe_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-889952 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-889952 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-889952 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-889952 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.028365679s)
--- PASS: TestAddons/parallel/LocalPath (51.82s)
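
The repeated helpers_test.go:394 lines above are a poll loop: the helper re-runs `kubectl get pvc test-pvc -o jsonpath={.status.phase}` until the claim leaves Pending, which here only happens once the test-local-path pod is scheduled (the local-path provisioner binds on first consumer). A minimal standalone sketch of that wait in Go, assuming only that kubectl is on PATH and reusing the context and claim names from this run; the poll interval is a guess, not minikube's actual helper:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound shells out to kubectl, as the log above does, until
	// the claim reports "Bound" or the deadline passes.
	func waitForPVCBound(context, namespace, claim string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", claim, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // assumed cadence; the real helper's differs
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, claim, timeout)
	}

	func main() {
		if err := waitForPVCBound("addons-889952", "default", "test-pvc", 5*time.Minute); err != nil {
			fmt.Println(err)
		}
	}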

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-b7q6h" [39b12e34-7535-44fe-8b38-80c87e242bff] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.011480812s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-889952
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.48s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-889952 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-889952 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-889952
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-889952: (10.918725166s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-889952
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-889952
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-889952
--- PASS: TestAddons/StoppedEnableDisable (11.22s)

                                                
                                    
TestCertOptions (43.3s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-227335 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E1128 00:07:43.504037    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-227335 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (40.128590348s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-227335 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-227335 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-227335 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-227335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-227335
E1128 00:08:00.392025    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-227335: (2.299722883s)
--- PASS: TestCertOptions (43.30s)
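
cert_options_test.go:60 uses openssl inside the node to confirm that the extra --apiserver-ips and --apiserver-names values landed in the certificate's Subject Alternative Names and that the server listens on port 8555. The same SAN check can be written with Go's standard library; a sketch, assuming it runs on the node (the certificate path is the one from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// --apiserver-names should appear here (localhost, www.google.com, ...).
		fmt.Println("DNS SANs:", cert.DNSNames)
		// --apiserver-ips should appear here (127.0.0.1, 192.168.15.15, ...).
		fmt.Println("IP SANs:", cert.IPAddresses)
	}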

                                                
                                    
TestCertExpiration (245.72s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-948892 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E1128 00:02:56.782184    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:03:00.392760    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-948892 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (34.041024403s)
E1128 00:03:37.742537    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:04:07.704648    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1128 00:04:59.663452    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:06:03.440560    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-948892 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-948892 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (29.558226211s)
helpers_test.go:175: Cleaning up "cert-expiration-948892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-948892
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-948892: (2.118993074s)
--- PASS: TestCertExpiration (245.72s)

                                                
                                    
TestDockerFlags (46.1s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-068102 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-068102 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.663657064s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-068102 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-068102 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-068102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-068102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-068102: (3.563504537s)
--- PASS: TestDockerFlags (46.10s)
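
docker_test.go:56 and :67 verify the flags by reading the unit back from systemd: each --docker-env value must appear in the Environment property and each --docker-opt in ExecStart. A hedged sketch of the Environment half, using the values from this run and assuming it executes inside the node (e.g. over `minikube ssh`), not the test's actual assertion code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("systemctl", "show", "docker",
			"--property=Environment", "--no-pager").Output()
		if err != nil {
			fmt.Println("systemctl failed:", err)
			return
		}
		env := string(out) // e.g. "Environment=FOO=BAR BAZ=BAT"
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(env, want) {
				fmt.Printf("missing %s in %s\n", want, env)
			}
		}
	}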

                                                
                                    
TestForceSystemdFlag (43.32s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-201639 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E1127 23:59:07.704610    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-201639 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.504190753s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-201639 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-201639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-201639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-201639: (2.327653919s)
--- PASS: TestForceSystemdFlag (43.32s)

                                                
                                    
TestForceSystemdEnv (45.38s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-632778 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-632778 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.443884707s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-632778 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-632778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-632778
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-632778: (2.41102331s)
--- PASS: TestForceSystemdEnv (45.38s)

                                                
                                    
TestErrorSpam/setup (33.05s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-985156 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-985156 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-985156 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-985156 --driver=docker  --container-runtime=docker: (33.045441841s)
--- PASS: TestErrorSpam/setup (33.05s)

                                                
                                    
TestErrorSpam/start (0.84s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

                                                
                                    
TestErrorSpam/status (1.07s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 status
--- PASS: TestErrorSpam/status (1.07s)

                                                
                                    
TestErrorSpam/pause (1.38s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 pause
--- PASS: TestErrorSpam/pause (1.38s)

                                                
                                    
TestErrorSpam/unpause (1.51s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 unpause
--- PASS: TestErrorSpam/unpause (1.51s)

                                                
                                    
TestErrorSpam/stop (2.12s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 stop: (1.904336178s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-985156 --log_dir /tmp/nospam-985156 stop
--- PASS: TestErrorSpam/stop (2.12s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17206-2172/.minikube/files/etc/test/nested/copy/7460/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (88.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-689033 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-689033 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m28.124437449s)
--- PASS: TestFunctional/serial/StartWithProxy (88.13s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.06s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-689033 --alsologtostderr -v=8
E1127 23:33:00.394603    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:00.400808    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:00.411004    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:00.431257    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:00.471537    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:00.551799    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:00.712406    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:01.032795    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:01.673705    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:02.954361    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-689033 --alsologtostderr -v=8: (37.056220482s)
functional_test.go:659: soft start took 37.061988275s for "functional-689033" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.06s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-689033 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cache add registry.k8s.io/pause:3.1
E1127 23:33:05.515494    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 cache add registry.k8s.io/pause:3.1: (1.027672757s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 cache add registry.k8s.io/pause:3.3: (1.061602092s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-689033 /tmp/TestFunctionalserialCacheCmdcacheadd_local3887359377/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cache add minikube-local-cache-test:functional-689033
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cache delete minikube-local-cache-test:functional-689033
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-689033
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.94s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (336.845397ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1127 23:33:10.635782    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
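
The cache_reload sequence above is driven by exit codes: `crictl inspecti` exits non-zero while the image is absent (the FATA "no such image" output), then exits zero after `cache reload` restores it. In Go, "command ran but failed" is distinguished from "command could not run" via *exec.ExitError; a minimal sketch with the image name from this run, not minikube's helper:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// imagePresent treats a non-zero crictl exit as "image absent" and only
	// surfaces an error when crictl itself could not be started.
	func imagePresent(image string) (bool, error) {
		err := exec.Command("sudo", "crictl", "inspecti", image).Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil
		}
		return false, err
	}

	func main() {
		ok, err := imagePresent("registry.k8s.io/pause:latest")
		fmt.Println(ok, err)
	}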

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 kubectl -- --context functional-689033 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-689033 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-689033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 23:33:20.876685    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1127 23:33:41.356882    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-689033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.109695648s)
functional_test.go:757: restart took 43.109789507s for "functional-689033" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.11s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-689033 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
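
functional_test.go:806-831 fetch the control-plane pods as JSON and assert each is Running and Ready, which is what the phase/status pairs above report. A stripped-down version of that decode, with minimal hand-written structs and kubectl assumed on PATH (not the test's actual types):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-689033", "get", "po",
			"-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}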

                                                
                                    
TestFunctional/serial/LogsCmd (1.25s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 logs: (1.253370868s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 logs --file /tmp/TestFunctionalserialLogsFileCmd3395291449/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 logs --file /tmp/TestFunctionalserialLogsFileCmd3395291449/001/logs.txt: (1.288351544s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctional/serial/InvalidService (4.64s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-689033 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-689033
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-689033: exit status 115 (411.766404ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31066 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-689033 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.64s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 config get cpus: exit status 14 (80.242418ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 config get cpus: exit status 14 (85.989755ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)
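
Both non-zero exits above are expected: after `config unset cpus`, `config get cpus` fails with "specified key could not be found in config" (exit status 14 in this run), while the get between set and unset succeeds. A caller that wants "unset" as a value rather than an error can map that exit code; a sketch, with the binary name generalized and the exit code taken from this log rather than any documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// configGet maps the "key not found" exit seen above to ok=false.
	func configGet(profile, key string) (value string, ok bool, err error) {
		out, err := exec.Command("minikube", "-p", profile, "config", "get", key).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			return "", false, nil
		}
		return "", false, err
	}

	func main() {
		fmt.Println(configGet("functional-689033", "cpus"))
	}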

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-689033 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-689033 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48552: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.66s)
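
The helpers_test.go:508 message is benign: the dashboard process had already exited by the time cleanup tried to kill it, and "os: process already finished" is the sentinel Go returns for that case (os.ErrProcessDone). A sketch of a start/stop pattern that tolerates it, with a short sleep standing in for the dashboard daemon:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		cmd := exec.Command("sleep", "1") // stand-in for the daemon
		if err := cmd.Start(); err != nil {
			fmt.Println(err)
			return
		}
		done := make(chan error, 1)
		go func() { done <- cmd.Wait() }()

		time.Sleep(2 * time.Second) // the process exits and is reaped meanwhile

		// Kill after Wait has reaped the process yields os.ErrProcessDone,
		// i.e. "os: process already finished" -- not a failure.
		if err := cmd.Process.Kill(); err != nil && !errors.Is(err, os.ErrProcessDone) {
			fmt.Println("kill failed:", err)
		}
		<-done
	}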

                                                
                                    
TestFunctional/parallel/DryRun (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-689033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-689033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (254.880227ms)

                                                
                                                
-- stdout --
	* [functional-689033] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 23:34:51.119807   47762 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:34:51.119955   47762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:51.119983   47762 out.go:309] Setting ErrFile to fd 2...
	I1127 23:34:51.119991   47762 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:51.120264   47762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1127 23:34:51.120855   47762 out.go:303] Setting JSON to false
	I1127 23:34:51.121897   47762 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1040,"bootTime":1701127051,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:34:51.121963   47762 start.go:138] virtualization:  
	I1127 23:34:51.125780   47762 out.go:177] * [functional-689033] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1127 23:34:51.128732   47762 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:34:51.130927   47762 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:34:51.128870   47762 notify.go:220] Checking for updates...
	I1127 23:34:51.135763   47762 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:34:51.138500   47762 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:34:51.140733   47762 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:34:51.143037   47762 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:34:51.145806   47762 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:34:51.146400   47762 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:34:51.176830   47762 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:34:51.176949   47762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:34:51.293784   47762 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-27 23:34:51.283574198 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:34:51.293888   47762 docker.go:295] overlay module found
	I1127 23:34:51.297494   47762 out.go:177] * Using the docker driver based on existing profile
	I1127 23:34:51.299588   47762 start.go:298] selected driver: docker
	I1127 23:34:51.299605   47762 start.go:902] validating driver "docker" against &{Name:functional-689033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-689033 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:34:51.299737   47762 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:34:51.302347   47762 out.go:177] 
	W1127 23:34:51.304799   47762 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 23:34:51.306877   47762 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-689033 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.65s)
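
The exit status 23 here comes from validation alone: the dry run loads the existing profile, then rejects the request before touching the driver because 250MiB is below the 1800MB floor quoted in the message (note the mixed units: 250 MiB is about 262 MB). A sketch of the arithmetic behind that message, not minikube's actual validation code:

	package main

	import "fmt"

	const minUsableMB = 1800 // floor quoted in the RSRC_INSUFFICIENT_REQ_MEMORY message

	func main() {
		requestedMiB := 250.0
		requestedMB := requestedMiB * 1024 * 1024 / 1e6 // 250 MiB ~= 262 MB
		if requestedMB < minUsableMB {
			fmt.Printf("requested %.0fMiB (%.0fMB) < usable minimum %dMB\n",
				requestedMiB, requestedMB, minUsableMB)
		}
	}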

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-689033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-689033 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (326.978237ms)

                                                
                                                
-- stdout --
	* [functional-689033] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1127 23:34:51.817608   47940 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:34:51.817804   47940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:51.817811   47940 out.go:309] Setting ErrFile to fd 2...
	I1127 23:34:51.817817   47940 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:34:51.818979   47940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1127 23:34:51.819378   47940 out.go:303] Setting JSON to false
	I1127 23:34:51.820317   47940 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1041,"bootTime":1701127051,"procs":250,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1050-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1127 23:34:51.820381   47940 start.go:138] virtualization:  
	I1127 23:34:51.823360   47940 out.go:177] * [functional-689033] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1127 23:34:51.825884   47940 out.go:177]   - MINIKUBE_LOCATION=17206
	I1127 23:34:51.826064   47940 notify.go:220] Checking for updates...
	I1127 23:34:51.831540   47940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 23:34:51.833743   47940 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	I1127 23:34:51.835750   47940 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	I1127 23:34:51.837826   47940 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1127 23:34:51.840670   47940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 23:34:51.843426   47940 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:34:51.844013   47940 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 23:34:51.883678   47940 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1127 23:34:51.883797   47940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:34:52.016249   47940 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-11-27 23:34:52.002832106 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:34:52.016350   47940 docker.go:295] overlay module found
	I1127 23:34:52.019968   47940 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1127 23:34:52.022561   47940 start.go:298] selected driver: docker
	I1127 23:34:52.022579   47940 start.go:902] validating driver "docker" against &{Name:functional-689033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-689033 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 23:34:52.022666   47940 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 23:34:52.025753   47940 out.go:177] 
	W1127 23:34:52.029718   47940 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 23:34:52.032157   47940 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.33s)
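This test passes by design: it runs minikube start under a French locale while requesting only 250MiB, below the 1800MB usable minimum, and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY exit is emitted localized. A minimal sketch of locale-keyed message selection (the messages map and localized helper are hypothetical, not minikube's actual translation machinery; the French template is the string this run produced):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Hypothetical message catalog: the French entry is the string this run
// produced, the English entry is the corresponding minikube message.
var messages = map[string]string{
	"fr": "Fermeture en raison de %s : L'allocation de mémoire demandée %d Mio est inférieure au minimum utilisable de %d Mo",
	"en": "Exiting due to %s: Requested memory allocation %dMiB is less than the usable minimum of %dMB",
}

// localized picks a template by LANG prefix, defaulting to English.
func localized(reason string, requestedMiB, minimumMB int) string {
	lang := os.Getenv("LANG") // e.g. "fr_FR.UTF-8" in this test run
	if strings.HasPrefix(lang, "fr") {
		return fmt.Sprintf(messages["fr"], reason, requestedMiB, minimumMB)
	}
	return fmt.Sprintf(messages["en"], reason, requestedMiB, minimumMB)
}

func main() {
	fmt.Println(localized("RSRC_INSUFFICIENT_REQ_MEMORY", 250, 1800))
}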

TestFunctional/parallel/StatusCmd (1.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.13s)
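The -f argument above is a Go text/template rendered against minikube's status struct, so literal text outside the {{...}} actions (including the test's misspelled kublet label) passes through verbatim. A self-contained sketch, with an illustrative struct standing in for the real status type:

package main

import (
	"os"
	"text/template"
)

// status stands in for minikube's status type; only the fields the
// test's format string references are modeled here.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	t := template.Must(template.New("status").Parse(format))
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	_ = t.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
}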

TestFunctional/parallel/ServiceCmdConnect (7.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-689033 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-689033 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-4tvpv" [6fc5da86-1445-4a85-8b62-853b48ff8384] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-4tvpv" [6fc5da86-1445-4a85-8b62-853b48ff8384] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.010458579s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32384
functional_test.go:1674: http://192.168.49.2:32384: success! body:

Hostname: hello-node-connect-7799dfb7c6-4tvpv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32384
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)
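The check itself is simple: once minikube service --url reports the NodePort endpoint, the test GETs it and inspects the echoed body. A stripped-down version of that probe (single attempt, no retries; the URL is the one discovered in this run):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// NodePort endpoint reported by `minikube service hello-node-connect --url`.
	const url = "http://192.168.49.2:32384"

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s: %v", url, err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("read body: %v", err)
	}
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status %d", resp.StatusCode)
	}
	fmt.Printf("success! body:\n%s\n", body)
}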

TestFunctional/parallel/AddonsCmd (0.25s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.25s)

TestFunctional/parallel/PersistentVolumeClaim (28.55s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d46697e9-2862-41aa-8349-affa4d79869a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01277419s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-689033 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-689033 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-689033 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-689033 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [801e857a-8b58-49c0-ad54-517b74e34fc0] Pending
helpers_test.go:344: "sp-pod" [801e857a-8b58-49c0-ad54-517b74e34fc0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [801e857a-8b58-49c0-ad54-517b74e34fc0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.03032691s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-689033 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-689033 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-689033 delete -f testdata/storage-provisioner/pod.yaml: (1.100276534s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-689033 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9b6c5b95-846c-4338-98e9-f01d88672be9] Pending
helpers_test.go:344: "sp-pod" [9b6c5b95-846c-4338-98e9-f01d88672be9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9b6c5b95-846c-4338-98e9-f01d88672be9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.022496071s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-689033 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.55s)

TestFunctional/parallel/SSHCmd (0.85s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.85s)

TestFunctional/parallel/CpCmd (1.54s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh -n functional-689033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 cp functional-689033:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2165621727/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh -n functional-689033 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.54s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7460/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /etc/test/nested/copy/7460/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7460.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /etc/ssl/certs/7460.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7460.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /usr/share/ca-certificates/7460.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/74602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /etc/ssl/certs/74602.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/74602.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /usr/share/ca-certificates/74602.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.34s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-689033 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
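The go-template here takes the first item of the node list and ranges over its labels map, printing each key. The same pipeline run against a plain map (the label values are illustrative):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Shape mirrors `kubectl get nodes -o go-template`: .items[0].metadata.labels.
	data := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/arch":     "arm64",
				"kubernetes.io/hostname": "functional-689033",
			}}},
		},
	}
	t := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	_ = t.Execute(os.Stdout, data) // prints: kubernetes.io/arch kubernetes.io/hostname
	os.Stdout.WriteString("\n")
}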

TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh "sudo systemctl is-active crio": exit status 1 (390.886162ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

TestFunctional/parallel/License (0.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.46s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 version -o=json --components: (1.217158658s)
--- PASS: TestFunctional/parallel/Version/components (1.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-689033 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-689033
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-689033
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-689033 image ls --format short --alsologtostderr:
I1127 23:34:57.712701   49148 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:57.712872   49148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:57.712879   49148 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:57.712884   49148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:57.713187   49148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
I1127 23:34:57.713889   49148 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:34:57.714104   49148 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:34:57.714680   49148 cli_runner.go:164] Run: docker container inspect functional-689033 --format={{.State.Status}}
I1127 23:34:57.754769   49148 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:57.754837   49148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-689033
I1127 23:34:57.776449   49148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/functional-689033/id_rsa Username:docker}
I1127 23:34:57.868194   49148 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
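The stderr trace shows how the CLI locates the node's SSH port: a docker inspect template indexing NetworkSettings.Ports["22/tcp"][0].HostPort. The same lookup against mocked inspect data (the data literal is a stand-in; the template string is the one in the log):

package main

import (
	"os"
	"text/template"
)

type portBinding struct{ HostIP, HostPort string }

func main() {
	// Mocked slice of docker's NetworkSettings.Ports for one container.
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Ports": map[string][]portBinding{
				"22/tcp": {{HostIP: "127.0.0.1", HostPort: "32782"}},
			},
		},
	}
	t := template.Must(template.New("ssh-port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = t.Execute(os.Stdout, data) // prints 32782
	os.Stdout.WriteString("\n")
}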

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-689033 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| docker.io/library/nginx                     | latest            | 5628e5ea3c17f | 192MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/nginx                     | alpine            | aae348c9fbd40 | 48.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/localhost/my-image                | functional-689033 | da372694e06ba | 1.41MB |
| docker.io/library/minikube-local-cache-test | functional-689033 | 3d6b7357ba6b5 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| gcr.io/google-containers/addon-resizer      | functional-689033 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-689033 image ls --format table --alsologtostderr:
I1127 23:35:01.993214   49515 out.go:296] Setting OutFile to fd 1 ...
I1127 23:35:01.993414   49515 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:35:01.993426   49515 out.go:309] Setting ErrFile to fd 2...
I1127 23:35:01.993432   49515 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:35:01.993720   49515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
I1127 23:35:01.994479   49515 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:35:01.994650   49515 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:35:01.995190   49515 cli_runner.go:164] Run: docker container inspect functional-689033 --format={{.State.Status}}
I1127 23:35:02.017850   49515 ssh_runner.go:195] Run: systemctl --version
I1127 23:35:02.017902   49515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-689033
I1127 23:35:02.035638   49515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/functional-689033/id_rsa Username:docker}
I1127 23:35:02.123838   49515 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/11/27 23:35:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-689033 image ls --format json --alsologtostderr:
[{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d6b7357ba6b5e4d3c2a6943b54174d8caadd815a629b0bdd3b39fd35e163bb7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-689033"],"size":"30"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"aae348c9fbd40035f9fc24e2c9ccb9a
c0a8977a3f3441a997bb40f6011d45e9b","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48400000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigest
s":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-689033"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"da372694e06ba7414bd9d78d07e2d44ba68b814f73142624ea5a84b5380b7936","repoDigests":[],"repoTags":["docker.io/localhost/my-image:f
unctional-689033"],"size":"1410000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-689033 image ls --format json --alsologtostderr:
I1127 23:35:01.762166   49488 out.go:296] Setting OutFile to fd 1 ...
I1127 23:35:01.762373   49488 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:35:01.762384   49488 out.go:309] Setting ErrFile to fd 2...
I1127 23:35:01.762390   49488 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:35:01.762645   49488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
I1127 23:35:01.763282   49488 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:35:01.763419   49488 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:35:01.763980   49488 cli_runner.go:164] Run: docker container inspect functional-689033 --format={{.State.Status}}
I1127 23:35:01.782347   49488 ssh_runner.go:195] Run: systemctl --version
I1127 23:35:01.782402   49488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-689033
I1127 23:35:01.800829   49488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/functional-689033/id_rsa Username:docker}
I1127 23:35:01.895787   49488 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
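The JSON format's schema is visible in the dump above: id, repoDigests, repoTags, and size (bytes as a decimal string). A small decoder sketch for that shape, fed one entry from this run:

package main

import (
	"encoding/json"
	"fmt"
)

// listedImage matches the fields emitted by `image ls --format json`
// in the stdout above; names are lifted straight from that output.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	// One element from the run above, shortened to a single entry.
	raw := []byte(`[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"}]`)

	var imgs []listedImage
	if err := json.Unmarshal(raw, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		fmt.Printf("%-40s %s bytes\n", img.RepoTags[0], img.Size)
	}
}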

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-689033 image ls --format yaml --alsologtostderr:
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 3d6b7357ba6b5e4d3c2a6943b54174d8caadd815a629b0bdd3b39fd35e163bb7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-689033
size: "30"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48400000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-689033
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-689033 image ls --format yaml --alsologtostderr:
I1127 23:34:57.993527   49185 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:57.993685   49185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:57.993690   49185 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:57.993696   49185 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:57.993957   49185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
I1127 23:34:57.994620   49185 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:34:57.994756   49185 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:34:57.995329   49185 cli_runner.go:164] Run: docker container inspect functional-689033 --format={{.State.Status}}
I1127 23:34:58.014213   49185 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:58.014270   49185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-689033
I1127 23:34:58.034627   49185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/functional-689033/id_rsa Username:docker}
I1127 23:34:58.128010   49185 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh pgrep buildkitd: exit status 1 (364.132507ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image build -t localhost/my-image:functional-689033 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 image build -t localhost/my-image:functional-689033 testdata/build --alsologtostderr: (2.819479116s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-689033 image build -t localhost/my-image:functional-689033 testdata/build --alsologtostderr:
I1127 23:34:58.626371   49261 out.go:296] Setting OutFile to fd 1 ...
I1127 23:34:58.626575   49261 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:58.626581   49261 out.go:309] Setting ErrFile to fd 2...
I1127 23:34:58.626587   49261 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 23:34:58.626903   49261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
I1127 23:34:58.627678   49261 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:34:58.628247   49261 config.go:182] Loaded profile config "functional-689033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 23:34:58.628748   49261 cli_runner.go:164] Run: docker container inspect functional-689033 --format={{.State.Status}}
I1127 23:34:58.653958   49261 ssh_runner.go:195] Run: systemctl --version
I1127 23:34:58.654009   49261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-689033
I1127 23:34:58.689306   49261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/functional-689033/id_rsa Username:docker}
I1127 23:34:58.780344   49261 build_images.go:151] Building image from path: /tmp/build.3407439908.tar
I1127 23:34:58.780415   49261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 23:34:58.794138   49261 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3407439908.tar
I1127 23:34:58.800516   49261 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3407439908.tar: stat -c "%s %y" /var/lib/minikube/build/build.3407439908.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3407439908.tar': No such file or directory
I1127 23:34:58.800544   49261 ssh_runner.go:362] scp /tmp/build.3407439908.tar --> /var/lib/minikube/build/build.3407439908.tar (3072 bytes)
I1127 23:34:58.828542   49261 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3407439908
I1127 23:34:58.839188   49261 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3407439908 -xf /var/lib/minikube/build/build.3407439908.tar
I1127 23:34:58.850095   49261 docker.go:346] Building image: /var/lib/minikube/build/build.3407439908
I1127 23:34:58.850172   49261 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-689033 /var/lib/minikube/build/build.3407439908
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:da372694e06ba7414bd9d78d07e2d44ba68b814f73142624ea5a84b5380b7936 done
#8 naming to localhost/my-image:functional-689033 done
#8 DONE 0.1s
I1127 23:35:01.325343   49261 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-689033 /var/lib/minikube/build/build.3407439908: (2.475144895s)
I1127 23:35:01.325413   49261 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3407439908
I1127 23:35:01.338396   49261 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3407439908.tar
I1127 23:35:01.352652   49261 build_images.go:207] Built localhost/my-image:functional-689033 from /tmp/build.3407439908.tar
I1127 23:35:01.352720   49261 build_images.go:123] succeeded building to: functional-689033
I1127 23:35:01.352739   49261 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.49s)
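The build flow in the trace is: pack testdata/build into a tar under /tmp, scp it to /var/lib/minikube/build on the node, untar it there, then run docker build against the unpacked context. A condensed sketch of the pack step only (paths hypothetical; not minikube's exact code):

package main

import (
	"archive/tar"
	"io"
	"io/fs"
	"log"
	"os"
	"path/filepath"
)

// tarDir writes every regular file under src into a tar stream, using
// paths relative to src, mirroring the "Building image from path:
// /tmp/build.*.tar" step in the log above.
func tarDir(src string, w io.Writer) error {
	tw := tar.NewWriter(w)
	defer tw.Close()
	return filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		if hdr.Name, err = filepath.Rel(src, path); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	out, err := os.Create("/tmp/build.tar") // hypothetical destination
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if err := tarDir("testdata/build", out); err != nil {
		log.Fatal(err)
	}
}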

TestFunctional/parallel/ImageCommands/Setup (1.88s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.845942621s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-689033
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.88s)

TestFunctional/parallel/DockerEnv/bash (1.33s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-689033 docker-env) && out/minikube-linux-arm64 status -p functional-689033"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-689033 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.33s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image load --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 image load --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr: (3.798547242s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.03s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-689033 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-689033 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-66cc6" [17730c48-56ad-4556-9786-5c4445d9df39] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-66cc6" [17730c48-56ad-4556-9786-5c4445d9df39] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.026065805s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image load --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 image load --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr: (2.654954496s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.91s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.784534342s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-689033
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image load --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 image load --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr: (3.220434469s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.25s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image save gcr.io/google-containers/addon-resizer:functional-689033 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.85s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image rm gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.21s)

TestFunctional/parallel/ServiceCmd/List (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-689033
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 image save --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-689033 image save --daemon gcr.io/google-containers/addon-resizer:functional-689033 --alsologtostderr: (1.149327581s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-689033
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.20s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 service list -o json
functional_test.go:1493: Took "413.482936ms" to run "out/minikube-linux-arm64 -p functional-689033 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31610
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.63s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31610
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.71s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-689033 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-689033 tunnel --alsologtostderr]
E1127 23:34:22.317782    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-689033 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-689033 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45148: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-689033 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-689033 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [8058d627-fd90-4d1b-9760-344e387e5483] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [8058d627-fd90-4d1b-9760-344e387e5483] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.015915461s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-689033 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.220.206 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-689033 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "344.799529ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "99.596894ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "466.352412ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "114.433547ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdany-port3847025956/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701128082852552908" to /tmp/TestFunctionalparallelMountCmdany-port3847025956/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701128082852552908" to /tmp/TestFunctionalparallelMountCmdany-port3847025956/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701128082852552908" to /tmp/TestFunctionalparallelMountCmdany-port3847025956/001/test-1701128082852552908
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (402.858649ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 23:34 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 23:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 23:34 test-1701128082852552908
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh cat /mount-9p/test-1701128082852552908
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-689033 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3b089c15-ae1a-4af9-9978-afd7e48cf804] Pending
helpers_test.go:344: "busybox-mount" [3b089c15-ae1a-4af9-9978-afd7e48cf804] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3b089c15-ae1a-4af9-9978-afd7e48cf804] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3b089c15-ae1a-4af9-9978-afd7e48cf804] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.010365016s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-689033 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdany-port3847025956/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.16s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdspecific-port1208408155/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.811701ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdspecific-port1208408155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh "sudo umount -f /mount-9p": exit status 1 (416.515027ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-689033 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdspecific-port1208408155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2925143382/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2925143382/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2925143382/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T" /mount1: exit status 1 (1.048528284s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-689033 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-689033 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2925143382/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2925143382/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-689033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2925143382/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.03s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-689033
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-689033
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-689033
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (34.3s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-482214 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-482214 --driver=docker  --container-runtime=docker: (34.302867941s)
--- PASS: TestImageBuild/serial/Setup (34.30s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.91s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-482214
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-482214: (1.914202522s)
--- PASS: TestImageBuild/serial/NormalBuild (1.91s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.92s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-482214
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-482214
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.73s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-482214
E1127 23:35:44.238399    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.72s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (71.14s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-916543 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-916543 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m11.139666554s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (71.14s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.05s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons enable ingress --alsologtostderr -v=5: (11.053742709s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.05s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-916543 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

                                                
                                    
TestJSONOutput/start/Command (45.65s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-701258 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E1127 23:38:28.079441    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-701258 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (45.650560233s)
--- PASS: TestJSONOutput/start/Command (45.65s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.61s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-701258 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.54s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-701258 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-701258 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-701258 --output=json --user=testUser: (7.873056916s)
--- PASS: TestJSONOutput/stop/Command (7.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-213766 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-213766 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.356727ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2d0c5f65-bc0b-4ab7-9604-20c647777b78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-213766] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"397c09c7-557a-4477-9488-f3d0ecc71f12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"a6000619-ec60-4401-a72e-55a45a048427","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d2d127db-ddc0-4d4f-b8f3-7fcfecdba954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig"}}
	{"specversion":"1.0","id":"1bac69cf-09e2-476e-96a5-224a1e5f04ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube"}}
	{"specversion":"1.0","id":"b02d1d81-4f4a-4cc0-9f5b-e9e495bb9495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"96b2f97c-010d-4b51-8805-e681d91604d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"726fea8d-0d8e-4c41-bae9-3f06285fb66c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-213766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-213766
--- PASS: TestErrorJSONOutput (0.24s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (32.88s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-536804 --network=
E1127 23:39:07.706447    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:07.711686    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:07.721900    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:07.742116    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:07.782353    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:07.863082    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:08.023722    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:08.344487    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:08.985316    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:10.265950    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:12.826481    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:17.946676    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:39:28.187203    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-536804 --network=: (30.716418249s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-536804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-536804
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-536804: (2.138785191s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.88s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.28s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-569582 --network=bridge
E1127 23:39:48.667388    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-569582 --network=bridge: (30.258371304s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-569582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-569582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-569582: (1.997917017s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.28s)

                                                
                                    
TestKicExistingNetwork (32.77s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-936306 --network=existing-network
E1127 23:40:29.628628    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-936306 --network=existing-network: (30.632981779s)
helpers_test.go:175: Cleaning up "existing-network-936306" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-936306
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-936306: (1.992887602s)
--- PASS: TestKicExistingNetwork (32.77s)

                                                
                                    
TestKicCustomSubnet (36.35s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-350934 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-350934 --subnet=192.168.60.0/24: (34.301473317s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-350934 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-350934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-350934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-350934: (2.023170412s)
--- PASS: TestKicCustomSubnet (36.35s)

                                                
                                    
TestKicStaticIP (33.22s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-137962 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-137962 --static-ip=192.168.200.200: (30.988270894s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-137962 ip
helpers_test.go:175: Cleaning up "static-ip-137962" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-137962
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-137962: (2.057288828s)
--- PASS: TestKicStaticIP (33.22s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (71.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-936238 --driver=docker  --container-runtime=docker
E1127 23:41:51.548839    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:42:10.027692    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.032917    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.043111    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.063335    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.103561    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.183791    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.344111    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:10.664747    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:11.305316    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:12.585517    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:15.146404    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:20.267234    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-936238 --driver=docker  --container-runtime=docker: (32.302716473s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-938872 --driver=docker  --container-runtime=docker
E1127 23:42:30.508143    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:42:50.988961    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-938872 --driver=docker  --container-runtime=docker: (33.804090231s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-936238
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-938872
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-938872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-938872
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-938872: (2.042507774s)
helpers_test.go:175: Cleaning up "first-936238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-936238
E1127 23:43:00.391890    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-936238: (2.072229081s)
--- PASS: TestMinikubeProfile (71.45s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-975634 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-975634 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.430700243s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.43s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-975634 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-977365 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-977365 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.428250145s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.43s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-977365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-975634 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-975634 --alsologtostderr -v=5: (1.487165418s)
--- PASS: TestMountStart/serial/DeleteFirst (1.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-977365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-977365
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-977365: (1.222141583s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.16s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-977365
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-977365: (7.161237588s)
--- PASS: TestMountStart/serial/RestartStopped (8.16s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-977365 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-749825 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1127 23:44:07.704628    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:44:35.389053    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-749825 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.517584236s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr
E1127 23:44:53.871004    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.10s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (47.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-749825 -- rollout status deployment/busybox: (3.434085697s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-dg8zz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-j2hds -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-dg8zz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-j2hds -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-dg8zz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-j2hds -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (47.91s)
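The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines show the test polling until each busybox replica lands on its own node and reports a pod IP. A minimal shell sketch of the same wait loop, with plain kubectl standing in for the test's minikube kubectl wrapper, and a 60-second cap that is an assumption rather than the test's actual deadline:

    # Poll until the busybox deployment reports two pod IPs
    for i in $(seq 1 60); do
      ips=$(kubectl --context multinode-749825 get pods -o jsonpath='{.items[*].status.podIP}')
      [ "$(echo $ips | wc -w)" -eq 2 ] && break
      sleep 1
    done
    echo "pod IPs: $ips"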

TestMultiNode/serial/PingHostFrom2Pods (1.15s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-dg8zz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-dg8zz -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-j2hds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-749825 -- exec busybox-5bc68d56bd-j2hds -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.15s)
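The awk/cut pipeline above extracts the host gateway address from busybox nslookup output; NR==5 and field 3 assume busybox's fixed output layout, in which the fifth line carries the resolved address for host.minikube.internal. Replayed against one pod from this run (plain kubectl stands in for the test's wrapper):

    # Resolve the host gateway from inside a pod, then ping it once
    HOST_IP=$(kubectl --context multinode-749825 exec busybox-5bc68d56bd-dg8zz -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-749825 exec busybox-5bc68d56bd-dg8zz -- ping -c 1 "$HOST_IP"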

TestMultiNode/serial/AddNode (20.5s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-749825 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-749825 -v 3 --alsologtostderr: (19.75369497s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.50s)

TestMultiNode/serial/ProfileList (0.37s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.87s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp testdata/cp-test.txt multinode-749825:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3477071096/001/cp-test_multinode-749825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825:/home/docker/cp-test.txt multinode-749825-m02:/home/docker/cp-test_multinode-749825_multinode-749825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m02 "sudo cat /home/docker/cp-test_multinode-749825_multinode-749825-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825:/home/docker/cp-test.txt multinode-749825-m03:/home/docker/cp-test_multinode-749825_multinode-749825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m03 "sudo cat /home/docker/cp-test_multinode-749825_multinode-749825-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp testdata/cp-test.txt multinode-749825-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3477071096/001/cp-test_multinode-749825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825-m02:/home/docker/cp-test.txt multinode-749825:/home/docker/cp-test_multinode-749825-m02_multinode-749825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825 "sudo cat /home/docker/cp-test_multinode-749825-m02_multinode-749825.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825-m02:/home/docker/cp-test.txt multinode-749825-m03:/home/docker/cp-test_multinode-749825-m02_multinode-749825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m03 "sudo cat /home/docker/cp-test_multinode-749825-m02_multinode-749825-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp testdata/cp-test.txt multinode-749825-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3477071096/001/cp-test_multinode-749825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825-m03:/home/docker/cp-test.txt multinode-749825:/home/docker/cp-test_multinode-749825-m03_multinode-749825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825 "sudo cat /home/docker/cp-test_multinode-749825-m03_multinode-749825.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 cp multinode-749825-m03:/home/docker/cp-test.txt multinode-749825-m02:/home/docker/cp-test_multinode-749825-m03_multinode-749825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 ssh -n multinode-749825-m02 "sudo cat /home/docker/cp-test_multinode-749825-m03_multinode-749825-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.87s)
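minikube cp accepts <node>:<path> on either side, so the block above exercises every direction: host to node, node back to host, and node to node for each node pair. One instance of each, lifted from the run (the /tmp destination shortens the test's temp path):

    # host -> node
    minikube -p multinode-749825 cp testdata/cp-test.txt multinode-749825:/home/docker/cp-test.txt
    # node -> host
    minikube -p multinode-749825 cp multinode-749825:/home/docker/cp-test.txt /tmp/cp-test.txt
    # node -> node
    minikube -p multinode-749825 cp multinode-749825:/home/docker/cp-test.txt multinode-749825-m02:/home/docker/cp-test.txt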

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-749825 node stop m03: (1.235544219s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-749825 status: exit status 7 (548.903198ms)

-- stdout --
	multinode-749825
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-749825-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-749825-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr: exit status 7 (545.18462ms)

-- stdout --
	multinode-749825
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-749825-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-749825-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1127 23:46:16.596838  114152 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:46:16.597021  114152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:46:16.597033  114152 out.go:309] Setting ErrFile to fd 2...
	I1127 23:46:16.597040  114152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:46:16.597363  114152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1127 23:46:16.597565  114152 out.go:303] Setting JSON to false
	I1127 23:46:16.597657  114152 mustload.go:65] Loading cluster: multinode-749825
	I1127 23:46:16.597748  114152 notify.go:220] Checking for updates...
	I1127 23:46:16.598129  114152 config.go:182] Loaded profile config "multinode-749825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:46:16.598147  114152 status.go:255] checking status of multinode-749825 ...
	I1127 23:46:16.599903  114152 cli_runner.go:164] Run: docker container inspect multinode-749825 --format={{.State.Status}}
	I1127 23:46:16.618399  114152 status.go:330] multinode-749825 host status = "Running" (err=<nil>)
	I1127 23:46:16.618471  114152 host.go:66] Checking if "multinode-749825" exists ...
	I1127 23:46:16.618788  114152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-749825
	I1127 23:46:16.635439  114152 host.go:66] Checking if "multinode-749825" exists ...
	I1127 23:46:16.635725  114152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:46:16.635773  114152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-749825
	I1127 23:46:16.667956  114152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/multinode-749825/id_rsa Username:docker}
	I1127 23:46:16.760619  114152 ssh_runner.go:195] Run: systemctl --version
	I1127 23:46:16.765582  114152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:46:16.778396  114152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1127 23:46:16.843540  114152 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-27 23:46:16.834270546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1050-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f Expected:d8f198a4ed8892c764191ef7b3b06d8a2eeb5c7f} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1127 23:46:16.844125  114152 kubeconfig.go:92] found "multinode-749825" server: "https://192.168.58.2:8443"
	I1127 23:46:16.844144  114152 api_server.go:166] Checking apiserver status ...
	I1127 23:46:16.844184  114152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 23:46:16.858513  114152 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2131/cgroup
	I1127 23:46:16.870060  114152 api_server.go:182] apiserver freezer: "4:freezer:/docker/45a70bd9ea20f82d6f9e379c6fadd5bcc9ff2be8ebd5903441586decd99c490a/kubepods/burstable/pod4c032c479e509a35fb6569ce2af10de5/2f120c27681f448c1920144c162155dd492d205660319e94f0a7fd4a8878db58"
	I1127 23:46:16.870133  114152 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/45a70bd9ea20f82d6f9e379c6fadd5bcc9ff2be8ebd5903441586decd99c490a/kubepods/burstable/pod4c032c479e509a35fb6569ce2af10de5/2f120c27681f448c1920144c162155dd492d205660319e94f0a7fd4a8878db58/freezer.state
	I1127 23:46:16.879852  114152 api_server.go:204] freezer state: "THAWED"
	I1127 23:46:16.879875  114152 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1127 23:46:16.888570  114152 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1127 23:46:16.888598  114152 status.go:421] multinode-749825 apiserver status = Running (err=<nil>)
	I1127 23:46:16.888609  114152 status.go:257] multinode-749825 status: &{Name:multinode-749825 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:46:16.888625  114152 status.go:255] checking status of multinode-749825-m02 ...
	I1127 23:46:16.888927  114152 cli_runner.go:164] Run: docker container inspect multinode-749825-m02 --format={{.State.Status}}
	I1127 23:46:16.907519  114152 status.go:330] multinode-749825-m02 host status = "Running" (err=<nil>)
	I1127 23:46:16.907544  114152 host.go:66] Checking if "multinode-749825-m02" exists ...
	I1127 23:46:16.907847  114152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-749825-m02
	I1127 23:46:16.923933  114152 host.go:66] Checking if "multinode-749825-m02" exists ...
	I1127 23:46:16.924235  114152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 23:46:16.924278  114152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-749825-m02
	I1127 23:46:16.951984  114152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32857 SSHKeyPath:/home/jenkins/minikube-integration/17206-2172/.minikube/machines/multinode-749825-m02/id_rsa Username:docker}
	I1127 23:46:17.040218  114152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 23:46:17.052723  114152 status.go:257] multinode-749825-m02 status: &{Name:multinode-749825-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:46:17.052756  114152 status.go:255] checking status of multinode-749825-m03 ...
	I1127 23:46:17.053093  114152 cli_runner.go:164] Run: docker container inspect multinode-749825-m03 --format={{.State.Status}}
	I1127 23:46:17.070546  114152 status.go:330] multinode-749825-m03 host status = "Stopped" (err=<nil>)
	I1127 23:46:17.070572  114152 status.go:343] host is not running, skipping remaining checks
	I1127 23:46:17.070580  114152 status.go:257] multinode-749825-m03 status: &{Name:multinode-749825-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
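The stderr trace above shows how status validates the apiserver: locate the kube-apiserver process, confirm its freezer cgroup is THAWED rather than paused, then probe /healthz. The same steps as a shell sketch to run inside the node (for example via minikube -p multinode-749825 ssh); curl's -k is an assumption here, standing in for the client certificates minikube's own checker presents:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    CGPATH=$(sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
    sudo cat /sys/fs/cgroup/freezer$CGPATH/freezer.state    # expect THAWED
    curl -sk https://192.168.58.2:8443/healthz              # expect ok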

TestMultiNode/serial/StartAfterStop (13.45s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-749825 node start m03 --alsologtostderr: (12.614662464s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.45s)

TestMultiNode/serial/RestartKeepsNodes (120.89s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-749825
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-749825
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-749825: (22.666253523s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-749825 --wait=true -v=8 --alsologtostderr
E1127 23:47:10.027726    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:47:37.712022    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1127 23:48:00.392809    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-749825 --wait=true -v=8 --alsologtostderr: (1m38.055509747s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-749825
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.89s)

TestMultiNode/serial/DeleteNode (5.2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-749825 node delete m03: (4.461191958s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)
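The go-template in the final command prints one line per node carrying the status of its Ready condition, so after deleting m03 the expected output is exactly two "True" lines. Standalone, without the test harness's extra quoting:

    # One line per node: "True" when the node's Ready condition holds
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'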

TestMultiNode/serial/StopMultiNode (21.7s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-749825 stop: (21.491025857s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-749825 status: exit status 7 (103.785245ms)

-- stdout --
	multinode-749825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-749825-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr: exit status 7 (102.58231ms)

-- stdout --
	multinode-749825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-749825-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1127 23:48:58.269889  130118 out.go:296] Setting OutFile to fd 1 ...
	I1127 23:48:58.270026  130118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:48:58.270050  130118 out.go:309] Setting ErrFile to fd 2...
	I1127 23:48:58.270069  130118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 23:48:58.270401  130118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17206-2172/.minikube/bin
	I1127 23:48:58.270627  130118 out.go:303] Setting JSON to false
	I1127 23:48:58.270743  130118 mustload.go:65] Loading cluster: multinode-749825
	I1127 23:48:58.270775  130118 notify.go:220] Checking for updates...
	I1127 23:48:58.271275  130118 config.go:182] Loaded profile config "multinode-749825": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 23:48:58.271290  130118 status.go:255] checking status of multinode-749825 ...
	I1127 23:48:58.271872  130118 cli_runner.go:164] Run: docker container inspect multinode-749825 --format={{.State.Status}}
	I1127 23:48:58.292090  130118 status.go:330] multinode-749825 host status = "Stopped" (err=<nil>)
	I1127 23:48:58.292111  130118 status.go:343] host is not running, skipping remaining checks
	I1127 23:48:58.292118  130118 status.go:257] multinode-749825 status: &{Name:multinode-749825 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 23:48:58.292139  130118 status.go:255] checking status of multinode-749825-m02 ...
	I1127 23:48:58.292438  130118 cli_runner.go:164] Run: docker container inspect multinode-749825-m02 --format={{.State.Status}}
	I1127 23:48:58.308399  130118 status.go:330] multinode-749825-m02 host status = "Stopped" (err=<nil>)
	I1127 23:48:58.308416  130118 status.go:343] host is not running, skipping remaining checks
	I1127 23:48:58.308423  130118 status.go:257] multinode-749825-m02 status: &{Name:multinode-749825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.70s)

TestMultiNode/serial/RestartMultiNode (86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-749825 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E1127 23:49:07.705013    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1127 23:49:23.439626    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-749825 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m25.153871221s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-749825 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.00s)

TestMultiNode/serial/ValidateNameConflict (37.67s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-749825
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-749825-m02 --driver=docker  --container-runtime=docker
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-749825-m02 --driver=docker  --container-runtime=docker: exit status 14 (102.471111ms)

-- stdout --
	* [multinode-749825-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-749825-m02' is duplicated with machine name 'multinode-749825-m02' in profile 'multinode-749825'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-749825-m03 --driver=docker  --container-runtime=docker
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-749825-m03 --driver=docker  --container-runtime=docker: (34.670846191s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-749825
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-749825: exit status 80 (537.206361ms)

-- stdout --
	* Adding node m03 to cluster multinode-749825
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-749825-m03 already exists in multinode-749825-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-749825-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-749825-m03: (2.278028651s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.67s)

TestPreload (172.74s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-901659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E1127 23:52:10.028098    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-901659 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m45.674723146s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-901659 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-901659 image pull gcr.io/k8s-minikube/busybox: (1.768463891s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-901659
E1127 23:53:00.392786    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-901659: (10.896159676s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-901659 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-901659 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (51.977749861s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-901659 image list
helpers_test.go:175: Cleaning up "test-preload-901659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-901659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-901659: (2.18502656s)
--- PASS: TestPreload (172.74s)
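The sequence validates that an image pulled while preloaded tarballs are disabled survives a stop/start cycle and still appears in the image list afterwards. Condensed from the run, with the logging flags dropped:

    minikube start -p test-preload-901659 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=docker
    minikube -p test-preload-901659 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-901659
    minikube start -p test-preload-901659 --driver=docker --container-runtime=docker
    minikube -p test-preload-901659 image list    # busybox should still be listed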

TestScheduledStopUnix (106.02s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-045854 --memory=2048 --driver=docker  --container-runtime=docker
E1127 23:54:07.706597    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-045854 --memory=2048 --driver=docker  --container-runtime=docker: (32.675090733s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045854 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-045854 -n scheduled-stop-045854
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045854 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045854 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-045854 -n scheduled-stop-045854
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-045854
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-045854 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1127 23:55:30.750078    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-045854
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-045854: exit status 7 (77.304588ms)

-- stdout --
	scheduled-stop-045854
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-045854 -n scheduled-stop-045854
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-045854 -n scheduled-stop-045854: exit status 7 (88.945938ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-045854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-045854
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-045854: (1.679271364s)
--- PASS: TestScheduledStopUnix (106.02s)
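The test walks the full scheduled-stop lifecycle: arm a 5-minute stop, re-arm with a 15-second one (replacing the first timer), cancel it, then arm again and let it fire, after which status exits 7. The bare commands, lifted from the run:

    minikube stop -p scheduled-stop-045854 --schedule 5m
    minikube stop -p scheduled-stop-045854 --schedule 15s     # replaces the 5m timer
    minikube stop -p scheduled-stop-045854 --cancel-scheduled
    minikube stop -p scheduled-stop-045854 --schedule 15s     # left to fire this time
    minikube status -p scheduled-stop-045854                  # exit status 7 once stopped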

TestSkaffold (103.31s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4065117176 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-418549 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-418549 --memory=2600 --driver=docker  --container-runtime=docker: (31.299365237s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4065117176 run --minikube-profile skaffold-418549 --kube-context skaffold-418549 --status-check=true --port-forward=false --interactive=false
E1127 23:57:10.028505    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4065117176 run --minikube-profile skaffold-418549 --kube-context skaffold-418549 --status-check=true --port-forward=false --interactive=false: (57.240723307s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-66f75b7ccc-t4zs5" [cf369744-0df6-4873-829c-519519e5d407] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.022387713s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-674959d499-89mz7" [1422eb2a-a193-44e6-b0d4-902d26c89bf8] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.012095208s
helpers_test.go:175: Cleaning up "skaffold-418549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-418549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-418549: (2.965751812s)
--- PASS: TestSkaffold (103.31s)

TestInsufficientStorage (10.63s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-084180 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-084180 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.29157996s)

-- stdout --
	{"specversion":"1.0","id":"7907a974-226a-409a-bd71-03e76605ff5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-084180] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c276bbb0-48e4-4171-b0be-4b3b53a05eb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17206"}}
	{"specversion":"1.0","id":"9b1d73fb-7571-4b7e-8564-dd1f7f26b285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f9052dbc-e9df-4197-8f93-40e2dd4f75ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig"}}
	{"specversion":"1.0","id":"f94c854e-9708-4c01-b10f-210fb828b662","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube"}}
	{"specversion":"1.0","id":"d66228b3-ebf3-494b-a6fe-ce05c018b15d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"70aa2ab0-3704-47c0-80f6-ef40f7e8aab8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5f7e709f-86ed-473e-a2c9-30c90dcd41ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d49fd869-fffe-402a-a04f-215f8683faf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0da556b3-0501-4489-bdc0-964110faf750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0541968b-4b90-4878-9d44-fb599ffa5148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7989826e-ea20-4370-a8cd-70447607b26a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-084180 in cluster insufficient-storage-084180","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3af21f98-d007-42e8-a6bf-03be2c75610d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cde126fd-6db9-4aca-a413-98808c813480","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a59c39b-3a30-48ae-825d-02e5696c7505","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-084180 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-084180 --output=json --layout=cluster: exit status 7 (309.613201ms)

-- stdout --
	{"Name":"insufficient-storage-084180","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-084180","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1127 23:57:37.420322  166442 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-084180" does not appear in /home/jenkins/minikube-integration/17206-2172/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-084180 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-084180 --output=json --layout=cluster: exit status 7 (325.763065ms)

-- stdout --
	{"Name":"insufficient-storage-084180","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-084180","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1127 23:57:37.747387  166492 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-084180" does not appear in /home/jenkins/minikube-integration/17206-2172/kubeconfig
	E1127 23:57:37.759007  166492 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/insufficient-storage-084180/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-084180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-084180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-084180: (1.706938912s)
--- PASS: TestInsufficientStorage (10.63s)
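With --output=json every start step is emitted as one CloudEvents object per line, which makes the storage failure machine-readable. A jq one-liner to surface just the error event; jq is not part of the test and is shown only as an illustration:

    minikube start -p insufficient-storage-084180 --output=json --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'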

TestRunningBinaryUpgrade (111.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.69670205.exe start -p running-upgrade-883237 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.69670205.exe start -p running-upgrade-883237 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m0.871066885s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-883237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1128 00:09:07.704984    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-883237 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.860089916s)
helpers_test.go:175: Cleaning up "running-upgrade-883237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-883237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-883237: (2.325573537s)
--- PASS: TestRunningBinaryUpgrade (111.63s)
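The upgrade path under test: provision a cluster with an old released binary, then run start on the same profile with the binary under test, which must adopt the running cluster in place. Schematically, with the logging flags dropped:

    /tmp/minikube-v1.17.0.69670205.exe start -p running-upgrade-883237 --memory=2200 --vm-driver=docker --container-runtime=docker
    out/minikube-linux-arm64 start -p running-upgrade-883237 --memory=2200 --driver=docker --container-runtime=docker    # same profile, new binary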

TestKubernetesUpgrade (414.62s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m15.047027286s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-870849
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-870849: (11.374268625s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-870849 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-870849 status --format={{.Host}}: exit status 7 (117.834569ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E1128 00:02:10.030651    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:02:15.820372    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:15.825631    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:15.835863    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:15.856091    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:15.896321    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:15.976611    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:16.136987    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:16.457692    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:17.098408    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:18.378964    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:20.940014    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:26.060173    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:02:36.301173    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m46.546948454s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-870849 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (108.802394ms)
-- stdout --
	* [kubernetes-upgrade-870849] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-870849
	    minikube start -p kubernetes-upgrade-870849 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8708492 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-870849 --kubernetes-version=v1.29.0-rc.0
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-870849 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.921359362s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-870849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-870849
E1128 00:07:15.820499    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-870849: (2.392271111s)
--- PASS: TestKubernetesUpgrade (414.62s)
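
For reference, the sequence this test walks through can be reproduced by hand. Below is a minimal shell sketch using the same flags that appear in the log; the profile name upgrade-demo is illustrative, and the exit codes (7 for a stopped host, 106 for the refused downgrade) are the ones recorded above.

    # Start on an old Kubernetes version, stop, then upgrade in place.
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker
    minikube stop -p upgrade-demo
    minikube status -p upgrade-demo --format='{{.Host}}' || true  # exits 7 while the host is stopped
    minikube start -p upgrade-demo --kubernetes-version=v1.29.0-rc.0 --driver=docker
    # An in-place downgrade is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED) ...
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0 --driver=docker || echo "refused: $?"
    # ... so the documented recovery is delete-and-recreate at the older version:
    minikube delete -p upgrade-demo
    minikube start -p upgrade-demo --kubernetes-version=v1.16.0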

TestMissingContainerUpgrade (195.88s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.610873254.exe start -p missing-upgrade-155333 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.610873254.exe start -p missing-upgrade-155333 --memory=2200 --driver=docker  --container-runtime=docker: (1m58.201742963s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-155333
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-155333: (10.399946023s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-155333
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-155333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-155333 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.746189348s)
helpers_test.go:175: Cleaning up "missing-upgrade-155333" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-155333
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-155333: (2.25155591s)
--- PASS: TestMissingContainerUpgrade (195.88s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965774 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-965774 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (130.570167ms)
-- stdout --
	* [NoKubernetes-965774] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17206
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17206-2172/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17206-2172/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.13s)
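
The MK_USAGE error above is the expected outcome: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the recovery path the message itself suggests (the profile name no-k8s-demo is illustrative):

    # Clear any globally configured Kubernetes version, then start without Kubernetes.
    minikube config unset kubernetes-version
    minikube start -p no-k8s-demo --no-kubernetes --driver=docker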

TestNoKubernetes/serial/StartWithK8s (42.67s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965774 --driver=docker  --container-runtime=docker
E1127 23:58:00.392151    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965774 --driver=docker  --container-runtime=docker: (42.142141974s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-965774 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.67s)

TestNoKubernetes/serial/StartWithStopK8s (8.2s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965774 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965774 --no-kubernetes --driver=docker  --container-runtime=docker: (6.080042195s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-965774 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-965774 status -o json: exit status 2 (342.028781ms)
-- stdout --
	{"Name":"NoKubernetes-965774","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-965774
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-965774: (1.777086352s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.20s)
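
Note that minikube status exits non-zero (2 in the run above) while a component such as the kubelet is stopped, so scripts that only want the JSON must tolerate the exit code. A small sketch, assuming jq is available on the host:

    # Capture the status JSON even though the command exits 2 while the kubelet is stopped.
    status_json=$(minikube -p NoKubernetes-965774 status -o json || true)
    echo "$status_json" | jq -r '.Kubelet'   # prints "Stopped" for the state logged above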

TestNoKubernetes/serial/Start (8.28s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965774 --no-kubernetes --driver=docker  --container-runtime=docker
E1127 23:58:33.072425    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965774 --no-kubernetes --driver=docker  --container-runtime=docker: (8.282634689s)
--- PASS: TestNoKubernetes/serial/Start (8.28s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-965774 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-965774 "sudo systemctl is-active --quiet service kubelet": exit status 1 (450.692315ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.45s)
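
The ssh exit status 3 above is systemctl is-active reporting an inactive unit: the command exits 0 only when the unit is active, which is what makes it usable as a negative check. A minimal sketch of the same probe (plain minikube invocation for readability):

    # Exit 0 means kubelet is running; any non-zero exit means "not active".
    if minikube ssh -p NoKubernetes-965774 "sudo systemctl is-active --quiet kubelet"; then
        echo "kubelet is running (unexpected for a --no-kubernetes profile)"
    else
        echo "kubelet is not active, as expected"
    fi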

TestNoKubernetes/serial/ProfileList (1.57s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.57s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-965774
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-965774: (1.324733433s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (8.66s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-965774 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-965774 --driver=docker  --container-runtime=docker: (8.663654216s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-965774 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-965774 "sudo systemctl is-active --quiet service kubelet": exit status 1 (410.361411ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.41s)

TestStoppedBinaryUpgrade/Setup (1.69s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.69s)

TestStoppedBinaryUpgrade/Upgrade (149.61s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.689126618.exe start -p stopped-upgrade-908777 --memory=2200 --vm-driver=docker  --container-runtime=docker
E1128 00:07:10.027351    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.689126618.exe start -p stopped-upgrade-908777 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m34.157045975s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.689126618.exe -p stopped-upgrade-908777 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.689126618.exe -p stopped-upgrade-908777 stop: (11.006112889s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-908777 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-908777 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (44.449754872s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (149.61s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-908777
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-908777: (2.131270309s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.13s)

TestPause/serial/Start (57.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-103397 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-103397 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (57.728327787s)
--- PASS: TestPause/serial/Start (57.73s)

TestNetworkPlugins/group/auto/Start (78.67s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m18.666008208s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.67s)

TestPause/serial/SecondStartNoReconfiguration (38.03s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-103397 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-103397 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (38.010598086s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.03s)

TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-103397 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-103397 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-103397 --output=json --layout=cluster: exit status 2 (358.590277ms)
-- stdout --
	{"Name":"pause-103397","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-103397","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
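
In the --layout=cluster output above, paused components carry StatusCode 418 ("Paused") and the command exits 2 to mirror that state. A sketch for pulling per-component states out of that JSON, assuming jq is available:

    # status exits 2 while the cluster is paused, so keep the pipeline from aborting.
    minikube status -p pause-103397 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"' || true
    # Expected output for the state above: "apiserver: Paused" and "kubelet: Stopped".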

TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-103397 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

TestPause/serial/PauseAgain (0.78s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-103397 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

TestPause/serial/DeletePaused (2.36s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-103397 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-103397 --alsologtostderr -v=5: (2.363480509s)
--- PASS: TestPause/serial/DeletePaused (2.36s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (13.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7zbt8" [cc245a78-aa71-4472-981b-4e7cf0931dd9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7zbt8" [cc245a78-aa71-4472-981b-4e7cf0931dd9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.017659735s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.42s)

TestPause/serial/VerifyDeletedResources (0.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-103397
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-103397: exit status 1 (17.109321ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-103397: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)
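
The non-zero exit from docker volume inspect above is the assertion succeeding: once the profile is deleted, Docker answers "no such volume", prints [], and exits 1. A minimal equivalent check:

    # Treat "no such volume" as success: the profile's volume should be gone after delete.
    if docker volume inspect pause-103397 >/dev/null 2>&1; then
        echo "volume still exists (cleanup incomplete)"
    else
        echo "volume removed, as expected"
    fi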

TestNetworkPlugins/group/kindnet/Start (57.52s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.523429743s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.52s)

TestNetworkPlugins/group/auto/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.27s)
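
The DNS, Localhost, and HairPin checks above all run from inside the netcat deployment: resolve kubernetes.default, dial localhost, then dial back through the pod's own service name (the hairpin case). The same probes, condensed into one sketch using the context name from the log:

    # Three connectivity probes against the in-cluster netcat deployment.
    kubectl --context auto-811755 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"  # hairpin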

TestNetworkPlugins/group/calico/Start (85.03s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E1128 00:12:10.027842    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m25.034351141s)
--- PASS: TestNetworkPlugins/group/calico/Start (85.03s)

TestNetworkPlugins/group/kindnet/ControllerPod (7.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cp2l5" [48bd7d74-0a85-470c-9cf1-688889d1d2e7] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
E1128 00:12:10.750611    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
helpers_test.go:344: "kindnet-cp2l5" [48bd7d74-0a85-470c-9cf1-688889d1d2e7] Running
E1128 00:12:15.820643    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 7.041425158s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (7.04s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.52s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8v8g9" [c70a0ec5-479a-48c3-9cdc-7956d2006f09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8v8g9" [c70a0ec5-479a-48c3-9cdc-7956d2006f09] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.021965283s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.52s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (69.06s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m9.056896915s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (69.06s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-p7q9m" [cb94460a-40cc-44e9-a76b-8200f65b7713] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.042001819s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (11.62s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4wpkm" [f0700ed8-c591-4fda-a5bd-56a8fc1db9e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4wpkm" [f0700ed8-c591-4fda-a5bd-56a8fc1db9e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.013493754s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.62s)

TestNetworkPlugins/group/calico/DNS (0.43s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.43s)

TestNetworkPlugins/group/calico/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.33s)

TestNetworkPlugins/group/calico/HairPin (0.32s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.32s)

TestNetworkPlugins/group/false/Start (56.51s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E1128 00:14:07.704623    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (56.506784893s)
--- PASS: TestNetworkPlugins/group/false/Start (56.51s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4wmkj" [344a59bb-c62d-4aae-ba4a-ef5c572d9589] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4wmkj" [344a59bb-c62d-4aae-ba4a-ef5c572d9589] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010540239s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.40s)

TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (89.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m29.693169012s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (89.69s)

TestNetworkPlugins/group/false/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.31s)

TestNetworkPlugins/group/false/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ckmfc" [6984e164-c7ba-43a4-a9e1-405c1cd5433f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ckmfc" [6984e164-c7ba-43a4-a9e1-405c1cd5433f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.009024931s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.35s)

TestNetworkPlugins/group/false/DNS (26.53s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-811755 exec deployment/netcat -- nslookup kubernetes.default
E1128 00:15:13.073571    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-811755 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.268678628s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context false-811755 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context false-811755 exec deployment/netcat -- nslookup kubernetes.default: (10.21480102s)
--- PASS: TestNetworkPlugins/group/false/DNS (26.53s)
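
Here the first lookup timed out and the test passed only because it retried; lookups that fail right after pod creation and then succeed are typical while in-cluster DNS is still settling. A hedged sketch of the same retry pattern:

    # Retry the in-cluster DNS lookup a few times before giving up.
    for attempt in 1 2 3; do
        kubectl --context false-811755 exec deployment/netcat -- nslookup kubernetes.default && break
        sleep 5  # brief pause before the next attempt
    done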

TestNetworkPlugins/group/false/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.19s)

TestNetworkPlugins/group/false/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.26s)

TestNetworkPlugins/group/flannel/Start (63.56s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E1128 00:16:11.465696    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:11.471732    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:11.482106    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:11.502412    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:11.543344    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:11.624035    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:11.784441    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:12.105344    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:12.745860    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:14.026903    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:16:16.587961    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m3.561293527s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.56s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.61s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bblpc" [9b823f6f-0737-4d17-b6ca-312403d7acda] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 00:16:21.708610    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-bblpc" [9b823f6f-0737-4d17-b6ca-312403d7acda] Running
E1128 00:16:31.949052    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.029936984s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.61s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/bridge/Start (93.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m33.500902341s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.50s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c2b5m" [76257ae2-11e8-4607-943e-e687b3085991] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.042259159s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.51s)

TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nw26j" [6f380a7b-006e-4037-9d9a-0035c1661da8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 00:17:10.027631    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:17:10.501619    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:10.506839    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:10.517053    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:10.537370    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:10.577714    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:10.658566    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:10.819330    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:11.139840    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:11.780502    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nw26j" [6f380a7b-006e-4037-9d9a-0035c1661da8] Running
E1128 00:17:13.061598    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:15.621858    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:17:15.820255    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.008764968s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)
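
Note: the Localhost and HairPin subtests share one probe, nc -w 5 -i 5 -z <target> 8080; the -z flag opens a TCP connection and closes it without sending data, and -w bounds the wait at five seconds. HairPin differs only in the target: the pod dials its own service name ("netcat"), which routes back through the service IP and so exercises the hairpin path. A rough Go equivalent of the zero-I/O probe (assumes it runs inside the pod; names copied from this run):

-- sketch (Go) --
package main

import (
	"fmt"
	"net"
	"time"
)

// probe mimics `nc -w 5 -z host port`: establish a TCP connection within
// the timeout, send nothing, close.
func probe(host string, port int, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", fmt.Sprintf("%s:%d", host, port), timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// "localhost" checks the pod itself; the service name "netcat"
	// loops back through the service IP, the hairpin case.
	for _, host := range []string{"localhost", "netcat"} {
		if err := probe(host, 8080, 5*time.Second); err != nil {
			fmt.Printf("%s:8080 unreachable: %v\n", host, err)
		}
	}
}
-- /sketch --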

TestNetworkPlugins/group/kubenet/Start (50.36s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E1128 00:17:51.464011    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:18:00.392112    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1128 00:18:17.074737    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.080010    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.090238    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.110481    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.150714    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.230994    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.391322    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:17.712447    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:18.352661    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:19.633663    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:22.194507    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:27.315494    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:32.425082    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-811755 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (50.362290153s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (50.36s)
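
Note: the Start subtests differ mainly in the flag set passed to minikube start; here --network-plugin=kubenet selects the kubelet's built-in bridge networking rather than a CNI add-on, and --wait=true --wait-timeout=15m blocks until core components report healthy. A sketch that drives the same invocation under an overall deadline (flags and profile name copied from the command above):

-- sketch (Go) --
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Bound the whole start with the same ceiling as --wait-timeout=15m.
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "minikube", "start",
		"-p", "kubenet-811755",
		"--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--network-plugin=kubenet",
		"--driver=docker", "--container-runtime=docker")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed:", err)
	}
}
-- /sketch --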

TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.49s)
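
Note: KubeletFlags simply greps the kubelet's command line over SSH; pgrep -a prints the pid plus full argv, which lets the suite confirm that the networking flags it requested actually reached the kubelet. A sketch of the same check (profile name copied from this run; the printout stands in for the real assertion):

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeletArgs returns the running kubelet's pid and full command line on
// the given profile, exactly what `pgrep -a kubelet` prints over SSH.
func kubeletArgs(profile string) (string, error) {
	out, err := exec.Command("minikube", "ssh", "-p", profile,
		"pgrep -a kubelet").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	args, err := kubeletArgs("bridge-811755")
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Println("kubelet command line:", args)
}
-- /sketch --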

TestNetworkPlugins/group/bridge/NetCatPod (13.49s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4x8lx" [3ed7a3fd-7d81-4fe5-a1e7-4416defaf731] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4x8lx" [3ed7a3fd-7d81-4fe5-a1e7-4416defaf731] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.014369045s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.49s)
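
Note: each NetCatPod subtest applies testdata/netcat-deployment.yaml and then polls until a pod labelled app=netcat reports Ready, logging every observed phase transition as above. The same gate can be expressed with kubectl's built-in condition wait, sketched here with the test's 15m ceiling (context name copied from this run):

-- sketch (Go) --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Equivalent readiness gate via `kubectl wait`, same 15m cap as the
	// NetCatPod helper.
	cmd := exec.Command("kubectl", "--context", "bridge-811755",
		"wait", "--for=condition=ready", "pod",
		"--selector=app=netcat", "--timeout=15m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("netcat pod never became ready:", err)
	}
}
-- /sketch --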

TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-811755 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.47s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-811755 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v5l4x" [874e47ff-a3ae-4602-a732-ce3a091c7fad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 00:18:37.556013    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:18:38.864529    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-v5l4x" [874e47ff-a3ae-4602-a732-ce3a091c7fad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.013356511s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.47s)

TestNetworkPlugins/group/kubenet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)

TestNetworkPlugins/group/bridge/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-811755 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.26s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-811755 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
E1128 00:35:49.823443    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:36:11.466279    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:36:21.163571    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:36:22.193784    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:36:37.725465    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:37:01.413583    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:37:05.409612    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:37:10.027697    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:37:10.501399    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:37:15.820914    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:37:44.206502    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:38:00.392126    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1128 00:38:17.075047    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:38:24.459564    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:38:34.398456    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:38:37.490934    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:39:07.704444    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1128 00:39:13.484322    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:39:23.441270    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
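
Note: the recurring cert_rotation.go:168 lines throughout this report appear to be noise rather than failures: the shared kubeconfig evidently still references client certificates for profiles that earlier tests deleted, so client-go's certificate-rotation loop logs a missing-file error on every tick. A sketch of how such stale entries could be detected (standard client-go API; the cleanup policy itself is left open):

-- sketch (Go) --
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	// Report authInfos whose client certificate file has been deleted,
	// the same condition the cert_rotation errors above trip on.
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, err := os.Stat(auth.ClientCertificate); os.IsNotExist(err) {
			fmt.Printf("stale cert for %q: %s\n", name, auth.ClientCertificate)
		}
	}
}
-- /sketch --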

TestStartStop/group/old-k8s-version/serial/FirstStart (144.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-966910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1128 00:19:13.484406    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:13.489992    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:13.500227    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:13.522422    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:13.564053    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:13.644840    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:13.805872    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:14.126669    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-966910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m24.176427726s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (144.18s)

TestStartStop/group/no-preload/serial/FirstStart (66.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-054375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0
E1128 00:19:14.767119    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:16.048050    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:18.609039    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:23.729631    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:33.970053    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:39.002702    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:19:54.345275    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:19:54.450793    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:19:59.151778    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.157369    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.167539    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.187717    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.227923    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.308165    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.468489    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:19:59.788979    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:20:00.429125    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:20:01.710149    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:20:04.270643    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:20:09.390973    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:20:19.632050    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-054375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0: (1m6.770108654s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.77s)

TestStartStop/group/no-preload/serial/DeployApp (10.03s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-054375 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2bfcdd2f-dbf0-4ec6-9b7e-5ccc94bfacf7] Pending
helpers_test.go:344: "busybox" [2bfcdd2f-dbf0-4ec6-9b7e-5ccc94bfacf7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2bfcdd2f-dbf0-4ec6-9b7e-5ccc94bfacf7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.045708768s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-054375 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.03s)
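
Note: DeployApp ends by running ulimit -n inside the busybox pod, a quick sanity check that the container inherits a usable open-file limit from the runtime. The same probe, standalone (context and pod names copied from this run):

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// Read the open-files soft limit inside the busybox pod and make
	// sure it parses as a number.
	out, err := exec.Command("kubectl", "--context", "no-preload-054375",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	n, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("unexpected output:", string(out))
		return
	}
	fmt.Println("open-file limit in container:", n)
}
-- /sketch --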

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-054375 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-054375 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/no-preload/serial/Stop (10.97s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-054375 --alsologtostderr -v=3
E1128 00:20:35.411567    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:20:40.112804    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-054375 --alsologtostderr -v=3: (10.97220155s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.97s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-054375 -n no-preload-054375
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-054375 -n no-preload-054375: exit status 7 (96.315901ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-054375 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)
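
Note: minikube status deliberately exits nonzero when the host is down, which is why the test annotates exit status 7 with "may be ok": stdout still carries the Stopped state. Telling that apart from a real failure means inspecting the exit code rather than treating any error as fatal, roughly as sketched here (the specific code 7 is taken from this run):

-- sketch (Go) --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", "no-preload-054375").Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("host:", string(out))
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Nonzero exit with "Stopped" on stdout is the expected shape
		// here, "may be ok" in the test's own words.
		fmt.Println("host stopped:", string(out))
	default:
		fmt.Println("status failed:", err)
	}
}
-- /sketch --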

TestStartStop/group/no-preload/serial/SecondStart (340.59s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-054375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0
E1128 00:21:00.923488    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:21:11.466233    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:21:21.072984    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:21:21.163193    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.168431    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.178585    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.198823    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.239136    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.319412    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.479892    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:21.800489    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:22.440943    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:23.722024    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:26.282224    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:21:31.402616    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-054375 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0: (5m40.154723482s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-054375 -n no-preload-054375
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (340.59s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-966910 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3716eb34-5ee3-4c17-ac3a-33d6ced263dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1128 00:21:39.151492    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
helpers_test.go:344: "busybox" [3716eb34-5ee3-4c17-ac3a-33d6ced263dc] Running
E1128 00:21:41.643708    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.029161017s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-966910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-966910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-966910 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/old-k8s-version/serial/Stop (11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-966910 --alsologtostderr -v=3
E1128 00:21:57.332328    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-966910 --alsologtostderr -v=3: (10.999069869s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.00s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-966910 -n old-k8s-version-966910
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-966910 -n old-k8s-version-966910: exit status 7 (93.306836ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-966910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (445.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-966910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E1128 00:22:01.413427    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:01.418692    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:01.428914    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:01.449152    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:01.489741    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:01.569982    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:01.730473    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:02.051543    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:02.124741    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:22:02.692560    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:03.973069    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:06.533409    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:10.027425    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:22:10.501337    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:22:11.653534    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:15.819731    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:22:21.894609    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:38.186415    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:22:42.375619    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:22:42.993186    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:22:43.085453    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:22:43.441034    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1128 00:23:00.392070    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1128 00:23:17.075658    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:23:23.336377    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:23:34.398022    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:34.403256    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:34.413466    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:34.433719    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:34.473952    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:34.554287    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:34.714557    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:35.035031    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:35.675491    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:36.955748    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:37.490476    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:37.495781    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:37.506015    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:37.526236    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:37.566482    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:37.646817    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:37.807405    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:38.128007    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:38.768661    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:39.516755    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:40.049584    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:42.610394    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:44.637767    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:44.763982    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:23:47.730538    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:23:54.878567    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:23:57.971459    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:24:05.005599    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:24:07.705073    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1128 00:24:13.484299    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:24:15.358944    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:24:18.452408    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:24:41.173407    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:24:45.257573    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:24:56.320026    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:24:59.151532    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:24:59.412810    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:25:26.833404    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:26:11.465881    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
E1128 00:26:18.241037    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:26:21.163663    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:26:21.333773    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-966910 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m25.014546526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-966910 -n old-k8s-version-966910
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (445.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ks8nm" [1604c918-9816-4ee1-a716-178f616f3e0a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ks8nm" [1604c918-9816-4ee1-a716-178f616f3e0a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.025645496s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-ks8nm" [1604c918-9816-4ee1-a716-178f616f3e0a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00922783s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-054375 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-054375 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)
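
Note: VerifyKubernetesImages lists the runtime's images as JSON over SSH (sudo crictl images -o json) and flags anything outside the expected Kubernetes set, such as the busybox test image above. A decoding sketch (field names follow crictl's JSON output; the registry filter is illustrative):

-- sketch (Go) --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imageList matches the slice of `crictl images -o json` output that
// this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("minikube", "ssh", "-p", "no-preload-054375",
		"sudo crictl images -o json").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			// Illustrative filter: report anything outside registry.k8s.io,
			// like the busybox image flagged above.
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("non-minikube image:", tag)
			}
		}
	}
}
-- /sketch --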

TestStartStop/group/no-preload/serial/Pause (3.07s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-054375 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-054375 -n no-preload-054375
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-054375 -n no-preload-054375: exit status 2 (348.374904ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-054375 -n no-preload-054375
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-054375 -n no-preload-054375: exit status 2 (360.344782ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-054375 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-054375 -n no-preload-054375
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-054375 -n no-preload-054375
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)
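
Note: in this run, pause froze the apiserver (status reports Paused) while the kubelet shows Stopped, and minikube status exited 2 for both queries, again "may be ok" from the test's perspective; unpause restores both. A sketch that reads the same status fields around a pause (profile name copied from this run; the comments record what this run observed):

-- sketch (Go) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField reads one Go-template field from `minikube status`,
// ignoring the nonzero exit that paused and stopped states produce.
func statusField(profile, field string) string {
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "no-preload-054375"
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		fmt.Println("pause failed:", err)
		return
	}
	fmt.Println("apiserver:", statusField(profile, "APIServer")) // Paused in this run
	fmt.Println("kubelet:  ", statusField(profile, "Kubelet"))   // Stopped in this run
	_ = exec.Command("minikube", "unpause", "-p", profile).Run()
}
-- /sketch --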

TestStartStop/group/embed-certs/serial/FirstStart (46.75s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-171634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1128 00:26:48.846072    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
E1128 00:27:01.419258    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
E1128 00:27:10.027661    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/ingress-addon-legacy-916543/client.crt: no such file or directory
E1128 00:27:10.500893    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kindnet-811755/client.crt: no such file or directory
E1128 00:27:15.819723    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:27:29.098416    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/flannel-811755/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-171634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (46.748591599s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.75s)
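
Note: --embed-certs makes minikube inline the certificate bytes into the kubeconfig entry instead of referencing files under the profile directory, which incidentally avoids the stale-path condition behind the cert_rotation noise elsewhere in this report. A sketch that checks which kubeconfig users carry embedded certs (standard client-go API):

-- sketch (Go) --
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	// With --embed-certs the cert bytes live inline in
	// ClientCertificateData, so deleting a profile directory cannot
	// strand the kubeconfig entry.
	for name, auth := range cfg.AuthInfos {
		fmt.Printf("%s: embedded certs = %v\n", name, len(auth.ClientCertificateData) > 0)
	}
}
-- /sketch --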

TestStartStop/group/embed-certs/serial/DeployApp (10.7s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-171634 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8584aef9-e9cb-4800-884b-ebbb15c422c9] Pending
helpers_test.go:344: "busybox" [8584aef9-e9cb-4800-884b-ebbb15c422c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8584aef9-e9cb-4800-884b-ebbb15c422c9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.053567275s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-171634 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.70s)
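
DeployApp only needs an unprivileged pod to schedule and answer an exec; the manifest itself (testdata/busybox.yaml) is not reproduced in this report. A hypothetical stand-in built from the pod name, label, and busybox image that do appear in the log:

    # Illustrative manifest; the real testdata/busybox.yaml may differ.
    cat <<'EOF' | kubectl --context embed-certs-171634 create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ['sh', '-c', 'sleep 3600']
    EOF
    kubectl --context embed-certs-171634 exec busybox -- /bin/sh -c "ulimit -n"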

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-171634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-171634 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.304589465s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-171634 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.41s)
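
EnableAddonWhileActive points the metrics-server addon at a stand-in image on a deliberately unreachable registry (fake.domain); the describe call is how the test verifies the override reached the deployment. A manual spot-check along the same lines (the grep pattern is illustrative):

    kubectl --context embed-certs-171634 -n kube-system describe deploy/metrics-server \
      | grep -i 'image:'   # the image should carry the fake.domain registry override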

TestStartStop/group/embed-certs/serial/Stop (10.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-171634 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-171634 --alsologtostderr -v=3: (10.990164775s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.99s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171634 -n embed-certs-171634
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171634 -n embed-certs-171634: exit status 7 (96.131079ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-171634 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
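
Exit status 7 is the expected "fully stopped" answer here: minikube's status documentation describes the exit code as a bitmask of host, cluster, and Kubernetes health, so 7 (1+2+4) means all three are down, which is exactly what the harness wants before enabling an addon offline. A sketch of the same check, assuming that documented encoding:

    minikube status --format='{{.Host}}' -p embed-certs-171634
    rc=$?
    [ "$rc" -eq 7 ] && echo "profile fully stopped"
    # addon toggles made while stopped take effect on the next start
    minikube addons enable dashboard -p embed-certs-171634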

TestStartStop/group/embed-certs/serial/SecondStart (344.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-171634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1128 00:28:00.392546    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/addons-889952/client.crt: no such file or directory
E1128 00:28:17.074874    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
E1128 00:28:34.398454    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:28:37.490905    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:28:50.751432    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1128 00:29:02.081758    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/bridge-811755/client.crt: no such file or directory
E1128 00:29:05.174289    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/kubenet-811755/client.crt: no such file or directory
E1128 00:29:07.705022    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/functional-689033/client.crt: no such file or directory
E1128 00:29:13.484719    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-171634 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m43.54729181s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-171634 -n embed-certs-171634
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (344.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-q77vn" [30464b70-3dd6-4e31-b7aa-85498e109ae7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025398589s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)
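
UserAppExistsAfterStop passes once the dashboard pod created before the restart reports Ready again; the harness polls the k8s-app=kubernetes-dashboard selector with a 9m budget. The stock-kubectl equivalent of that wait:

    kubectl --context old-k8s-version-966910 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m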

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-q77vn" [30464b70-3dd6-4e31-b7aa-85498e109ae7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009427208s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-966910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-966910 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)
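
VerifyKubernetesImages dumps the images held by the container runtime and flags anything outside the expected Kubernetes set; the busybox image is reported but tolerated as the known test workload. A manual equivalent, assuming jq is available on the host (the filter is illustrative):

    minikube ssh -p old-k8s-version-966910 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]' | sort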

TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-966910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-966910 -n old-k8s-version-966910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-966910 -n old-k8s-version-966910: exit status 2 (371.803575ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-966910 -n old-k8s-version-966910
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-966910 -n old-k8s-version-966910: exit status 2 (348.718757ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-966910 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-966910 -n old-k8s-version-966910
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-966910 -n old-k8s-version-966910
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-172365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E1128 00:29:59.151757    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
E1128 00:30:22.140412    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.145704    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.156004    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.176319    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.216667    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.297015    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.457609    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:22.778077    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:23.418273    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:24.699420    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:27.259612    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:32.380610    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:30:42.621184    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
E1128 00:31:03.101383    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-172365 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m26.424871463s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (86.42s)
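
The only deviation in this group is --apiserver-port=8444 in place of minikube's default 8443; the rest of the serial flow matches the other profiles. One way to confirm the non-default port landed in the generated kubeconfig (the jsonpath filter is illustrative):

    kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-172365")].cluster.server}'
    # expect an https URL ending in :8444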

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-172365 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [65f03896-0930-4540-8719-3b6a4216640c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [65f03896-0930-4540-8719-3b6a4216640c] Running
E1128 00:31:11.465760    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/auto-811755/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.032417934s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-172365 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.47s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-172365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-172365 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013476566s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-172365 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-172365 --alsologtostderr -v=3
E1128 00:31:21.163084    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/enable-default-cni-811755/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-172365 --alsologtostderr -v=3: (10.949088633s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-172365 -n default-k8s-diff-port-172365: exit status 7 (94.513089ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-172365 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8fb7n" [d9d6e1d1-81b0-4225-8e2b-7ce3bfb0c723] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8fb7n" [d9d6e1d1-81b0-4225-8e2b-7ce3bfb0c723] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.041998179s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8fb7n" [d9d6e1d1-81b0-4225-8e2b-7ce3bfb0c723] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008830109s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-171634 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-171634 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/embed-certs/serial/Pause (3.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-171634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171634 -n embed-certs-171634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171634 -n embed-certs-171634: exit status 2 (342.218663ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-171634 -n embed-certs-171634
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-171634 -n embed-certs-171634: exit status 2 (354.601231ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-171634 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-171634 -n embed-certs-171634
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-171634 -n embed-certs-171634
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.09s)

TestStartStop/group/newest-cni/serial/FirstStart (48.95s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-654646 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0
E1128 00:34:13.484723    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
E1128 00:34:21.569382    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/old-k8s-version-966910/client.crt: no such file or directory
E1128 00:34:40.124783    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/calico-811755/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-654646 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0: (48.950764296s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-654646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-654646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.249257849s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)
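
The "cni mode requires additional setup" warning explains why DeployApp above and UserAppExistsAfterStop/AddonExistsAfterStop below are no-ops in this group: --network-plugin=cni reserves the pod CIDR (10.42.0.0/16 here) but installs no CNI plugin, so workloads cannot schedule until one is applied. A quick way to observe that state (standard kubectl; the exact condition text varies by version):

    kubectl --context newest-cni-654646 get nodes
    kubectl --context newest-cni-654646 describe nodes | grep -i 'network plugin'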

TestStartStop/group/newest-cni/serial/Stop (5.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-654646 --alsologtostderr -v=3
E1128 00:34:59.152038    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/false-811755/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-654646 --alsologtostderr -v=3: (5.764916874s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.76s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-654646 -n newest-cni-654646
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-654646 -n newest-cni-654646: exit status 7 (101.626988ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-654646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (31.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-654646 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0
E1128 00:35:18.865678    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/skaffold-418549/client.crt: no such file or directory
E1128 00:35:22.140710    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/no-preload-054375/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-654646 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.0: (31.434248573s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-654646 -n newest-cni-654646
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.86s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-654646 "sudo crictl images -o json"
E1128 00:35:36.533865    7460 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17206-2172/.minikube/profiles/custom-flannel-811755/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (2.98s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-654646 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-654646 -n newest-cni-654646
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-654646 -n newest-cni-654646: exit status 2 (352.851685ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-654646 -n newest-cni-654646
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-654646 -n newest-cni-654646: exit status 2 (361.514609ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-654646 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-654646 -n newest-cni-654646
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-654646 -n newest-cni-654646
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.98s)

Test skip (27/329)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
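
This cached-images skip, the binaries skips, and the same pair for the other Kubernetes versions below all trace back to one condition: the preload tarball for each version is already on disk, so there is nothing left to download or cache. Assuming minikube's standard cache layout, the tarballs live under the .minikube directory (file names vary by Kubernetes version, runtime, and architecture):

    ls ~/.minikube/cache/preloaded-tarball/
    # e.g. preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4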

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-856124 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-856124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-856124
--- SKIP: TestDownloadOnlyKic (0.59s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.77s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-811755 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-811755

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-811755

>>> host: /etc/nsswitch.conf:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /etc/hosts:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /etc/resolv.conf:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-811755

>>> host: crictl pods:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: crictl containers:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> k8s: describe netcat deployment:
error: context "cilium-811755" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-811755" does not exist

>>> k8s: netcat logs:
error: context "cilium-811755" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-811755" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-811755" does not exist

>>> k8s: coredns logs:
error: context "cilium-811755" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-811755" does not exist

>>> k8s: api server logs:
error: context "cilium-811755" does not exist

>>> host: /etc/cni:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: ip a s:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: ip r s:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: iptables-save:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: iptables table nat:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-811755

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-811755

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-811755" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-811755" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-811755

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-811755

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-811755" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-811755" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-811755" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-811755" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-811755" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: kubelet daemon config:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"
                                                
>>> k8s: kubelet logs:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

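Note: the empty kubeconfig above (clusters, contexts, and users all null) accounts for every "context not found" / "does not exist" error in this debug dump — the cilium-811755 profile was never started, so no kubectl context was ever written for it. A quick way to confirm which contexts a kubeconfig actually holds (a sketch using standard kubectl subcommands, not part of the test harness output):

	kubectl config get-contexts
	kubectl config view -o jsonpath='{.contexts[*].name}'
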
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-811755

>>> host: docker daemon status:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: docker daemon config:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: docker system info:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: cri-docker daemon status:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: cri-docker daemon config:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: cri-dockerd version:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: containerd daemon status:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: containerd daemon config:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: containerd config dump:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: crio daemon status:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: crio daemon config:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: /etc/crio:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

>>> host: crio config:
* Profile "cilium-811755" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-811755"

----------------------- debugLogs end: cilium-811755 [took: 4.54762672s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-811755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-811755
--- SKIP: TestNetworkPlugins/group/cilium (4.77s)
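Note: the cilium group was skipped before a cluster was ever created, which is why the entire debug dump above can only report missing-profile and missing-context errors. To actually exercise this suite against a cilium CNI, the profile the debug logger expects would need to exist first — a minimal sketch (--cni=cilium is a standard minikube flag; the profile name is copied from this log):

	out/minikube-linux-arm64 start -p cilium-811755 --cni=cilium
	out/minikube-linux-arm64 profile list
	kubectl --context cilium-811755 get pods -n kube-system
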

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-439021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-439021
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
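Note: start_stop_delete_test.go:103 gates this test on the active driver, and this run uses the docker driver. A minimal Go sketch of such a driver-gated skip — driverName() is a hypothetical stand-in for however the harness exposes the selected driver, and this is not the actual minikube test source:

	package integration

	import "testing"

	// driverName is a hypothetical helper; assume it returns the value of
	// the --driver flag for this run (e.g. "docker" on this job).
	func driverName() string { return "docker" }

	func TestDisableDriverMounts(t *testing.T) {
		// --disable-driver-mounts is only meaningful for VM drivers,
		// so skip everywhere except virtualbox.
		if driverName() != "virtualbox" {
			t.Skipf("skipping %s - only runs on virtualbox", t.Name())
		}
	}
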
