Test Report: Docker_Linux_docker_arm64 18384

818397ea37b8941bfdd3d988b855153c5c099b26:2024-03-14:33567

Failed tests (2/350)

Order  Failed test                   Duration (s)
39     TestAddons/parallel/Ingress   37.49
262    TestScheduledStopUnix         35.3
TestAddons/parallel/Ingress (37.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-511560 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-511560 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-511560 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6e888d91-32ae-47b5-9be6-ab11957f2c68] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6e888d91-32ae-47b5-9be6-ab11957f2c68] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004780275s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-511560 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.070299081s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-511560 addons disable ingress --alsologtostderr -v=1: (7.709985639s)
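The nslookup timeout above is frequently transient (ingress-dns may simply not be ready yet). A minimal sketch of a retry wrapper one might use when reproducing the check by hand; `retry` is a hypothetical helper, not part of the minikube test suite, and the IP is the cluster address reported in the log:

```shell
# Hypothetical helper: run a command up to N times, sleeping between
# attempts, and succeed as soon as the command does. Useful for flaky
# DNS lookups like the one that failed above.
retry() {
  attempts=$1
  shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0            # command succeeded on this attempt
    fi
    i=$((i + 1))
    [ "$i" -le "$attempts" ] && sleep 1   # back off before retrying
  done
  return 1                # exhausted all attempts
}

# Example against the cluster IP from this log (192.168.49.2); substitute
# the output of `minikube -p addons-511560 ip` on your own machine:
# retry 5 nslookup hello-john.test 192.168.49.2
```

This only masks flakiness; if the lookup still fails after several attempts, the ingress-dns pod itself is worth inspecting.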
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-511560
helpers_test.go:235: (dbg) docker inspect addons-511560:

-- stdout --
	[
	    {
	        "Id": "74a626887909014f81ecc6c135a725e56f21604b7c78c66672961614529ea967",
	        "Created": "2024-03-14T18:33:27.167862945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 549582,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-14T18:33:27.462206988Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/74a626887909014f81ecc6c135a725e56f21604b7c78c66672961614529ea967/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a626887909014f81ecc6c135a725e56f21604b7c78c66672961614529ea967/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a626887909014f81ecc6c135a725e56f21604b7c78c66672961614529ea967/hosts",
	        "LogPath": "/var/lib/docker/containers/74a626887909014f81ecc6c135a725e56f21604b7c78c66672961614529ea967/74a626887909014f81ecc6c135a725e56f21604b7c78c66672961614529ea967-json.log",
	        "Name": "/addons-511560",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-511560:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-511560",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f3aaed894fe759ea9e8dc6ce7ad2bd8d07dbbf57e403ed3de77b4d206d3dfdc4-init/diff:/var/lib/docker/overlay2/5d0772f9548c62b17706c652675b28e51ca47810b015447035374bcde04cf033/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f3aaed894fe759ea9e8dc6ce7ad2bd8d07dbbf57e403ed3de77b4d206d3dfdc4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f3aaed894fe759ea9e8dc6ce7ad2bd8d07dbbf57e403ed3de77b4d206d3dfdc4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f3aaed894fe759ea9e8dc6ce7ad2bd8d07dbbf57e403ed3de77b4d206d3dfdc4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-511560",
	                "Source": "/var/lib/docker/volumes/addons-511560/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-511560",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-511560",
	                "name.minikube.sigs.k8s.io": "addons-511560",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7052a916180edb70e9abbc817204751331f0cc7f4e7f5e23f4841a1e05dc6da4",
	            "SandboxKey": "/var/run/docker/netns/7052a916180e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-511560": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74a626887909",
	                        "addons-511560"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "6db2e8935b9e50d8fc2b5dbb760380561f01379e30087749d8036a593ff463ea",
	                    "EndpointID": "e2b09f80a41f833831cddcc36b89f43e0b8c4ae10dc99a945734e572c1e01fb2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-511560",
	                        "74a626887909"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-511560 -n addons-511560
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-511560 logs -n 25: (1.173134059s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| delete  | -p download-only-800676                                                                     | download-only-800676   | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| delete  | -p download-only-747918                                                                     | download-only-747918   | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| delete  | -p download-only-206632                                                                     | download-only-206632   | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| delete  | -p download-only-800676                                                                     | download-only-800676   | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| start   | --download-only -p                                                                          | download-docker-976617 | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC |                     |
	|         | download-docker-976617                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-976617                                                                   | download-docker-976617 | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-030368   | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC |                     |
	|         | binary-mirror-030368                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39555                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-030368                                                                     | binary-mirror-030368   | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:33 UTC |
	| addons  | enable dashboard -p                                                                         | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC |                     |
	|         | addons-511560                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC |                     |
	|         | addons-511560                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-511560 --wait=true                                                                | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC | 14 Mar 24 18:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=docker                                                                 |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-511560 ip                                                                            | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:35 UTC | 14 Mar 24 18:35 UTC |
	| addons  | addons-511560 addons disable                                                                | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:35 UTC | 14 Mar 24 18:35 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-511560 addons                                                                        | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:35 UTC | 14 Mar 24 18:35 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | addons-511560                                                                               |                        |         |         |                     |                     |
	| addons  | addons-511560 addons                                                                        | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-511560 addons                                                                        | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-511560 ssh curl -s                                                                   | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-511560 ip                                                                            | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	| addons  | disable nvidia-device-plugin                                                                | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | -p addons-511560                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-511560 ssh cat                                                                       | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | /opt/local-path-provisioner/pvc-e2c10d25-b178-46d7-b7e9-3f699f3ef4aa_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-511560 addons disable                                                                | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-511560 addons disable                                                                | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-511560 addons disable                                                                | addons-511560          | jenkins | v1.32.0 | 14 Mar 24 18:36 UTC | 14 Mar 24 18:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:33:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:33:03.257462  549120 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:33:03.257656  549120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:33:03.257683  549120 out.go:304] Setting ErrFile to fd 2...
	I0314 18:33:03.257701  549120 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:33:03.258025  549120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:33:03.258544  549120 out.go:298] Setting JSON to false
	I0314 18:33:03.259459  549120 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11727,"bootTime":1710429457,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 18:33:03.259560  549120 start.go:139] virtualization:  
	I0314 18:33:03.262572  549120 out.go:177] * [addons-511560] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 18:33:03.267391  549120 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:33:03.267527  549120 notify.go:220] Checking for updates...
	I0314 18:33:03.271948  549120 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:33:03.274215  549120 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:33:03.276440  549120 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 18:33:03.278380  549120 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 18:33:03.280311  549120 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:33:03.282665  549120 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:33:03.302736  549120 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:33:03.302875  549120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:33:03.367323  549120 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 18:33:03.358468606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:33:03.367438  549120 docker.go:295] overlay module found
	I0314 18:33:03.370024  549120 out.go:177] * Using the docker driver based on user configuration
	I0314 18:33:03.372299  549120 start.go:297] selected driver: docker
	I0314 18:33:03.372318  549120 start.go:901] validating driver "docker" against <nil>
	I0314 18:33:03.372332  549120 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:33:03.372952  549120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:33:03.424806  549120 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 18:33:03.41603929 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:33:03.424979  549120 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:33:03.425236  549120 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:33:03.427455  549120 out.go:177] * Using Docker driver with root privileges
	I0314 18:33:03.429532  549120 cni.go:84] Creating CNI manager for ""
	I0314 18:33:03.429577  549120 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:33:03.429591  549120 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 18:33:03.429681  549120 start.go:340] cluster config:
	{Name:addons-511560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-511560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:33:03.432130  549120 out.go:177] * Starting "addons-511560" primary control-plane node in "addons-511560" cluster
	I0314 18:33:03.434046  549120 cache.go:121] Beginning downloading kic base image for docker with docker
	I0314 18:33:03.436290  549120 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0314 18:33:03.438192  549120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:33:03.438259  549120 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 18:33:03.438271  549120 cache.go:56] Caching tarball of preloaded images
	I0314 18:33:03.438280  549120 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 18:33:03.438361  549120 preload.go:173] Found /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 18:33:03.438372  549120 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:33:03.438713  549120 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/config.json ...
	I0314 18:33:03.438739  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/config.json: {Name:mk61edfe3212b291ea58fe520ef7414960b212f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:03.453189  549120 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 18:33:03.453327  549120 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 18:33:03.453348  549120 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 18:33:03.453353  549120 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 18:33:03.453361  549120 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 18:33:03.453366  549120 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from local cache
	I0314 18:33:19.590974  549120 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f from cached tarball
	I0314 18:33:19.591014  549120 cache.go:194] Successfully downloaded all kic artifacts
	I0314 18:33:19.591060  549120 start.go:360] acquireMachinesLock for addons-511560: {Name:mk381cd54d8a0a74a7ba10f1625068fc88dfaea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:33:19.591185  549120 start.go:364] duration metric: took 100.48µs to acquireMachinesLock for "addons-511560"
	I0314 18:33:19.591230  549120 start.go:93] Provisioning new machine with config: &{Name:addons-511560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-511560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:33:19.591315  549120 start.go:125] createHost starting for "" (driver="docker")
	I0314 18:33:19.593836  549120 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0314 18:33:19.594120  549120 start.go:159] libmachine.API.Create for "addons-511560" (driver="docker")
	I0314 18:33:19.594158  549120 client.go:168] LocalClient.Create starting
	I0314 18:33:19.594279  549120 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem
	I0314 18:33:20.012604  549120 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem
	I0314 18:33:21.039075  549120 cli_runner.go:164] Run: docker network inspect addons-511560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0314 18:33:21.054125  549120 cli_runner.go:211] docker network inspect addons-511560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0314 18:33:21.054220  549120 network_create.go:281] running [docker network inspect addons-511560] to gather additional debugging logs...
	I0314 18:33:21.054240  549120 cli_runner.go:164] Run: docker network inspect addons-511560
	W0314 18:33:21.069059  549120 cli_runner.go:211] docker network inspect addons-511560 returned with exit code 1
	I0314 18:33:21.069086  549120 network_create.go:284] error running [docker network inspect addons-511560]: docker network inspect addons-511560: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-511560 not found
	I0314 18:33:21.069098  549120 network_create.go:286] output of [docker network inspect addons-511560]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-511560 not found
	
	** /stderr **
	I0314 18:33:21.069203  549120 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 18:33:21.084979  549120 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002585060}
	I0314 18:33:21.085022  549120 network_create.go:124] attempt to create docker network addons-511560 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0314 18:33:21.085079  549120 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-511560 addons-511560
	I0314 18:33:21.150413  549120 network_create.go:108] docker network addons-511560 192.168.49.0/24 created
	I0314 18:33:21.150451  549120 kic.go:121] calculated static IP "192.168.49.2" for the "addons-511560" container
	I0314 18:33:21.150527  549120 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0314 18:33:21.164639  549120 cli_runner.go:164] Run: docker volume create addons-511560 --label name.minikube.sigs.k8s.io=addons-511560 --label created_by.minikube.sigs.k8s.io=true
	I0314 18:33:21.181740  549120 oci.go:103] Successfully created a docker volume addons-511560
	I0314 18:33:21.181845  549120 cli_runner.go:164] Run: docker run --rm --name addons-511560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-511560 --entrypoint /usr/bin/test -v addons-511560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0314 18:33:23.136466  549120 cli_runner.go:217] Completed: docker run --rm --name addons-511560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-511560 --entrypoint /usr/bin/test -v addons-511560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib: (1.954566216s)
	I0314 18:33:23.136497  549120 oci.go:107] Successfully prepared a docker volume addons-511560
	I0314 18:33:23.136519  549120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:33:23.136540  549120 kic.go:194] Starting extracting preloaded images to volume ...
	I0314 18:33:23.136628  549120 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-511560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0314 18:33:27.102766  549120 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-511560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (3.966086292s)
	I0314 18:33:27.102799  549120 kic.go:203] duration metric: took 3.966255135s to extract preloaded images to volume ...
	W0314 18:33:27.102955  549120 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0314 18:33:27.103082  549120 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0314 18:33:27.153916  549120 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-511560 --name addons-511560 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-511560 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-511560 --network addons-511560 --ip 192.168.49.2 --volume addons-511560:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0314 18:33:27.470755  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Running}}
	I0314 18:33:27.498600  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:33:27.522646  549120 cli_runner.go:164] Run: docker exec addons-511560 stat /var/lib/dpkg/alternatives/iptables
	I0314 18:33:27.590856  549120 oci.go:144] the created container "addons-511560" has a running status.
	I0314 18:33:27.590882  549120 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa...
	I0314 18:33:28.322951  549120 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0314 18:33:28.350884  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:33:28.369833  549120 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0314 18:33:28.369857  549120 kic_runner.go:114] Args: [docker exec --privileged addons-511560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0314 18:33:28.438954  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:33:28.470975  549120 machine.go:94] provisionDockerMachine start ...
	I0314 18:33:28.471086  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:28.504770  549120 main.go:141] libmachine: Using SSH client type: native
	I0314 18:33:28.505048  549120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0314 18:33:28.505065  549120 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:33:28.649461  549120 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-511560
	
	I0314 18:33:28.649535  549120 ubuntu.go:169] provisioning hostname "addons-511560"
	I0314 18:33:28.649650  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:28.669864  549120 main.go:141] libmachine: Using SSH client type: native
	I0314 18:33:28.670152  549120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0314 18:33:28.670171  549120 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-511560 && echo "addons-511560" | sudo tee /etc/hostname
	I0314 18:33:28.824526  549120 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-511560
	
	I0314 18:33:28.824616  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:28.842897  549120 main.go:141] libmachine: Using SSH client type: native
	I0314 18:33:28.843153  549120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0314 18:33:28.843174  549120 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-511560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-511560/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-511560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:33:28.981292  549120 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:33:28.981325  549120 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18384-542901/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-542901/.minikube}
	I0314 18:33:28.981355  549120 ubuntu.go:177] setting up certificates
	I0314 18:33:28.981366  549120 provision.go:84] configureAuth start
	I0314 18:33:28.981454  549120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-511560
	I0314 18:33:28.997618  549120 provision.go:143] copyHostCerts
	I0314 18:33:28.997723  549120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-542901/.minikube/ca.pem (1078 bytes)
	I0314 18:33:28.997878  549120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-542901/.minikube/cert.pem (1123 bytes)
	I0314 18:33:28.997943  549120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-542901/.minikube/key.pem (1679 bytes)
	I0314 18:33:28.997998  549120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-542901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca-key.pem org=jenkins.addons-511560 san=[127.0.0.1 192.168.49.2 addons-511560 localhost minikube]
	I0314 18:33:29.532365  549120 provision.go:177] copyRemoteCerts
	I0314 18:33:29.532429  549120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:33:29.532560  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:29.548467  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:33:29.646045  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 18:33:29.669853  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:33:29.693524  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:33:29.717334  549120 provision.go:87] duration metric: took 735.950147ms to configureAuth
	I0314 18:33:29.717366  549120 ubuntu.go:193] setting minikube options for container-runtime
	I0314 18:33:29.717572  549120 config.go:182] Loaded profile config "addons-511560": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:33:29.717635  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:29.733135  549120 main.go:141] libmachine: Using SSH client type: native
	I0314 18:33:29.733398  549120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0314 18:33:29.733437  549120 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 18:33:29.874086  549120 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0314 18:33:29.874110  549120 ubuntu.go:71] root file system type: overlay
	I0314 18:33:29.874290  549120 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 18:33:29.874385  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:29.892137  549120 main.go:141] libmachine: Using SSH client type: native
	I0314 18:33:29.892392  549120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0314 18:33:29.892476  549120 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 18:33:30.049784  549120 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 18:33:30.049881  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:30.071240  549120 main.go:141] libmachine: Using SSH client type: native
	I0314 18:33:30.071521  549120 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33509 <nil> <nil>}
	I0314 18:33:30.071547  549120 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 18:33:30.871544  549120 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-03-06 16:31:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-03-14 18:33:30.043982976 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
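An aside on the command above: the provisioner diffs the current unit against the new one and only swaps the file in (and restarts Docker) when they differ, which keeps the step idempotent. A minimal sketch of that "replace and restart only on change" idiom, using throwaway temp files rather than the real /lib/systemd/system/docker.service:

```shell
#!/bin/sh
set -eu
cur=$(mktemp); new=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$cur"
printf 'ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$new"

# diff exits non-zero when the files differ, so the else branch installs
# the new unit (the real command then runs systemctl daemon-reload and
# systemctl restart docker).
if diff -u "$cur" "$new" > /dev/null; then
  status=unchanged
else
  mv "$new" "$cur"
  status=updated
fi
echo "swap result: $status"
```

When the files already match, diff exits 0 and the restart is skipped entirely, which is why an unchanged re-provision leaves the daemon alone.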
	I0314 18:33:30.871579  549120 machine.go:97] duration metric: took 2.400583157s to provisionDockerMachine
	I0314 18:33:30.871591  549120 client.go:171] duration metric: took 11.27742317s to LocalClient.Create
	I0314 18:33:30.871610  549120 start.go:167] duration metric: took 11.277492158s to libmachine.API.Create "addons-511560"
	I0314 18:33:30.871623  549120 start.go:293] postStartSetup for "addons-511560" (driver="docker")
	I0314 18:33:30.871635  549120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:33:30.871713  549120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:33:30.871758  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:30.888334  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:33:30.986742  549120 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:33:30.989991  549120 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0314 18:33:30.990027  549120 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0314 18:33:30.990067  549120 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0314 18:33:30.990082  549120 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0314 18:33:30.990094  549120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-542901/.minikube/addons for local assets ...
	I0314 18:33:30.990184  549120 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-542901/.minikube/files for local assets ...
	I0314 18:33:30.990215  549120 start.go:296] duration metric: took 118.585468ms for postStartSetup
	I0314 18:33:30.990536  549120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-511560
	I0314 18:33:31.018241  549120 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/config.json ...
	I0314 18:33:31.018748  549120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:33:31.018803  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:31.034615  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:33:31.134199  549120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0314 18:33:31.138839  549120 start.go:128] duration metric: took 11.547506545s to createHost
	I0314 18:33:31.138907  549120 start.go:83] releasing machines lock for "addons-511560", held for 11.547708464s
	I0314 18:33:31.138985  549120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-511560
	I0314 18:33:31.154515  549120 ssh_runner.go:195] Run: cat /version.json
	I0314 18:33:31.154573  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:31.154836  549120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:33:31.154896  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:33:31.174496  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:33:31.175346  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:33:31.268969  549120 ssh_runner.go:195] Run: systemctl --version
	I0314 18:33:31.397742  549120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:33:31.401940  549120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0314 18:33:31.426837  549120 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0314 18:33:31.426967  549120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:33:31.457400  549120 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0314 18:33:31.457460  549120 start.go:494] detecting cgroup driver to use...
	I0314 18:33:31.457495  549120 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 18:33:31.457613  549120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:33:31.475110  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 18:33:31.484765  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 18:33:31.494444  549120 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 18:33:31.494535  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 18:33:31.504204  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:33:31.514129  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 18:33:31.524024  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 18:33:31.534104  549120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:33:31.543599  549120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 18:33:31.553229  549120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:33:31.561671  549120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:33:31.570296  549120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:33:31.648889  549120 ssh_runner.go:195] Run: sudo systemctl restart containerd
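The run of `sed` commands above rewrites /etc/containerd/config.toml in place before restarting containerd. A small sketch of the two most consequential edits (pinning the sandbox image and forcing the cgroupfs driver), applied to a throwaway copy of a config rather than the real file:

```shell
#!/bin/sh
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Pin the sandbox image and force the cgroupfs driver, mirroring the
# sed edits run against /etc/containerd/config.toml above. The \1
# backreference preserves the original indentation.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'sandbox_image|SystemdCgroup' "$cfg"
```

Because the pattern anchors on the whole line and reuses its leading whitespace, the edit works regardless of how deeply the key is nested in the TOML.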
	I0314 18:33:31.737533  549120 start.go:494] detecting cgroup driver to use...
	I0314 18:33:31.737596  549120 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 18:33:31.737659  549120 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 18:33:31.757270  549120 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0314 18:33:31.757369  549120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 18:33:31.769091  549120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:33:31.794110  549120 ssh_runner.go:195] Run: which cri-dockerd
	I0314 18:33:31.798029  549120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 18:33:31.807613  549120 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 18:33:31.828370  549120 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 18:33:31.934875  549120 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 18:33:32.035857  549120 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 18:33:32.036060  549120 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 18:33:32.064136  549120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:33:32.153847  549120 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 18:33:32.410362  549120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 18:33:32.422794  549120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:33:32.435484  549120 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 18:33:32.533838  549120 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 18:33:32.625107  549120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:33:32.713266  549120 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 18:33:32.727224  549120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 18:33:32.738517  549120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:33:32.825224  549120 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 18:33:32.905050  549120 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 18:33:32.905138  549120 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 18:33:32.910601  549120 start.go:562] Will wait 60s for crictl version
	I0314 18:33:32.910667  549120 ssh_runner.go:195] Run: which crictl
	I0314 18:33:32.914372  549120 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:33:32.962380  549120 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 18:33:32.962451  549120 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:33:32.984614  549120 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 18:33:33.020602  549120 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 18:33:33.020753  549120 cli_runner.go:164] Run: docker network inspect addons-511560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 18:33:33.041297  549120 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0314 18:33:33.045286  549120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
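The /etc/hosts command above uses a neat dedupe idiom: filter out any stale line for the name, append the fresh mapping, and copy the result back in one step, so the entry ends up present exactly once. A sketch against a scratch file instead of the real /etc/hosts:

```shell
#!/bin/bash
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing mapping for the name (|| true keeps going when there
# was none), append the current one, then copy the result back over the
# original file.
{ grep -v $'\thost.minikube.internal$' "$hosts" || true; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```

Running it repeatedly is safe: each pass removes the previous entry before re-adding it, so the file never accumulates duplicates.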
	I0314 18:33:33.056285  549120 kubeadm.go:877] updating cluster {Name:addons-511560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-511560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:33:33.056414  549120 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:33:33.056481  549120 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:33:33.074994  549120 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 18:33:33.075037  549120 docker.go:615] Images already preloaded, skipping extraction
	I0314 18:33:33.075123  549120 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 18:33:33.092903  549120 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 18:33:33.092926  549120 cache_images.go:84] Images are preloaded, skipping loading
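The "Images are preloaded, skipping loading" decision above boils down to a set comparison: every image the cluster needs must appear in the runtime's image list. A sketch of that check with `comm` (abridged, hypothetical lists; the real code does this comparison in Go):

```shell
#!/bin/bash
set -eu
have=$(mktemp); want=$(mktemp)
# Images reported by `docker images` (abridged).
cat > "$have" <<'EOF'
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/pause:3.9
EOF
# Images the cluster requires (abridged).
cat > "$want" <<'EOF'
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/kube-apiserver:v1.28.4
EOF
# comm requires sorted input; -13 prints only lines unique to "want",
# i.e. required images that are not present.
missing=$(comm -13 <(sort "$have") <(sort "$want"))
echo "missing images: '${missing}'"
```

An empty `missing` means extraction of the preload tarball can be skipped, which is the branch taken in the log above.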
	I0314 18:33:33.092938  549120 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.28.4 docker true true} ...
	I0314 18:33:33.093051  549120 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-511560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-511560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:33:33.093117  549120 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 18:33:33.138394  549120 cni.go:84] Creating CNI manager for ""
	I0314 18:33:33.138429  549120 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:33:33.138442  549120 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:33:33.138463  549120 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-511560 NodeName:addons-511560 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:33:33.138604  549120 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-511560"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
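The generated file above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---` lines, which is the form `kubeadm init --config` expects. A quick structural sanity check on a skeleton of that layout:

```shell
#!/bin/sh
set -eu
f=$(mktemp)
# Skeleton of the four-document kubeadm config written above.
cat > "$f" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# Three '---' separators bound four documents, one kind each.
seps=$(grep -c '^---$' "$f")
kinds=$(grep -c '^kind:' "$f")
echo "separators=$seps kinds=$kinds"
```

Bundling all four documents in one file lets a single `scp` (the `kubeadm.yaml.new` transfer above) carry the whole cluster configuration.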
	I0314 18:33:33.138691  549120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:33:33.147443  549120 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:33:33.147518  549120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 18:33:33.156222  549120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0314 18:33:33.174032  549120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:33:33.191332  549120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0314 18:33:33.208925  549120 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0314 18:33:33.212212  549120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:33:33.223533  549120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:33:33.309954  549120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:33:33.333892  549120 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560 for IP: 192.168.49.2
	I0314 18:33:33.333916  549120 certs.go:194] generating shared ca certs ...
	I0314 18:33:33.333935  549120 certs.go:226] acquiring lock for ca certs: {Name:mk75d138939e967a050dd4b5a1fc56eb3400f415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:33.334659  549120 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-542901/.minikube/ca.key
	I0314 18:33:34.492842  549120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/ca.crt ...
	I0314 18:33:34.492879  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/ca.crt: {Name:mk16a998c324e2aa3aaa872b471fea761d090391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:34.493081  549120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/ca.key ...
	I0314 18:33:34.493097  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/ca.key: {Name:mk8f5c954329005af088ad92905d534529f67777 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:34.493695  549120 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.key
	I0314 18:33:34.764695  549120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.crt ...
	I0314 18:33:34.764725  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.crt: {Name:mk85ea75425226705272d9c96fcd8fbbec116657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:34.764907  549120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.key ...
	I0314 18:33:34.764920  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.key: {Name:mk53963e85618db9777f7f7d73734845f1a4cc54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:34.765003  549120 certs.go:256] generating profile certs ...
	I0314 18:33:34.765066  549120 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.key
	I0314 18:33:34.765081  549120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt with IP's: []
	I0314 18:33:34.998460  549120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt ...
	I0314 18:33:34.998496  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: {Name:mkb920529e4e78e0323cc0eebed0f71dd01e7787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:34.998734  549120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.key ...
	I0314 18:33:34.998744  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.key: {Name:mk7a8fc03d0c0086d83f78a9f44ec46a108375b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:34.999656  549120 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.key.5d58307f
	I0314 18:33:34.999692  549120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.crt.5d58307f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0314 18:33:35.811525  549120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.crt.5d58307f ...
	I0314 18:33:35.811562  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.crt.5d58307f: {Name:mk0f9582cacd81690a5cc7fa33f3719a635a1685 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:35.811751  549120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.key.5d58307f ...
	I0314 18:33:35.811773  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.key.5d58307f: {Name:mk0349fe36040f6739873d7e5c0159c22ccc5461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:35.812349  549120 certs.go:381] copying /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.crt.5d58307f -> /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.crt
	I0314 18:33:35.812441  549120 certs.go:385] copying /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.key.5d58307f -> /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.key
	I0314 18:33:35.812498  549120 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.key
	I0314 18:33:35.812520  549120 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.crt with IP's: []
	I0314 18:33:36.527736  549120 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.crt ...
	I0314 18:33:36.527766  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.crt: {Name:mk1e52cf377a8c77fa8c47866e3d10d5819ad824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:36.527983  549120 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.key ...
	I0314 18:33:36.527999  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.key: {Name:mka52895d7f59fe5eb772002f2597329fd96adfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:33:36.528666  549120 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca-key.pem (1675 bytes)
	I0314 18:33:36.528719  549120 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem (1078 bytes)
	I0314 18:33:36.528753  549120 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:33:36.528783  549120 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/key.pem (1679 bytes)
	I0314 18:33:36.529469  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:33:36.555540  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 18:33:36.586094  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:33:36.610095  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 18:33:36.633967  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0314 18:33:36.658631  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:33:36.682569  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:33:36.707690  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:33:36.732688  549120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:33:36.757204  549120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:33:36.775449  549120 ssh_runner.go:195] Run: openssl version
	I0314 18:33:36.780744  549120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:33:36.790252  549120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:33:36.793867  549120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:33 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:33:36.793950  549120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:33:36.800978  549120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:33:36.810862  549120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:33:36.814220  549120 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:33:36.814269  549120 kubeadm.go:391] StartCluster: {Name:addons-511560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-511560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:33:36.814396  549120 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 18:33:36.831086  549120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:33:36.839905  549120 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:33:36.848938  549120 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0314 18:33:36.849006  549120 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:33:36.858158  549120 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 18:33:36.858198  549120 kubeadm.go:156] found existing configuration files:
	
	I0314 18:33:36.858248  549120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 18:33:36.867264  549120 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 18:33:36.867338  549120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 18:33:36.876115  549120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 18:33:36.885258  549120 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 18:33:36.885363  549120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 18:33:36.894266  549120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 18:33:36.903190  549120 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 18:33:36.903264  549120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:33:36.911904  549120 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 18:33:36.920794  549120 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 18:33:36.920887  549120 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 18:33:36.929600  549120 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0314 18:33:36.973227  549120 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 18:33:36.973511  549120 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 18:33:37.048682  549120 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0314 18:33:37.048882  549120 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0314 18:33:37.048948  549120 kubeadm.go:309] OS: Linux
	I0314 18:33:37.049014  549120 kubeadm.go:309] CGROUPS_CPU: enabled
	I0314 18:33:37.049083  549120 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0314 18:33:37.049159  549120 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0314 18:33:37.049225  549120 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0314 18:33:37.049296  549120 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0314 18:33:37.049364  549120 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0314 18:33:37.049465  549120 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0314 18:33:37.049543  549120 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0314 18:33:37.049635  549120 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0314 18:33:37.121717  549120 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 18:33:37.121860  549120 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 18:33:37.121978  549120 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 18:33:37.452102  549120 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:33:37.456601  549120 out.go:204]   - Generating certificates and keys ...
	I0314 18:33:37.456717  549120 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 18:33:37.456803  549120 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 18:33:38.308145  549120 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 18:33:38.490026  549120 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 18:33:39.015386  549120 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 18:33:39.515677  549120 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 18:33:39.968310  549120 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 18:33:39.968663  549120 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-511560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0314 18:33:40.309780  549120 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 18:33:40.310148  549120 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-511560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0314 18:33:40.827746  549120 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 18:33:41.376436  549120 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 18:33:41.609102  549120 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 18:33:41.609325  549120 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:33:42.246694  549120 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 18:33:42.898527  549120 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 18:33:43.331443  549120 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:33:43.834762  549120 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:33:43.835527  549120 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:33:43.838538  549120 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:33:43.840885  549120 out.go:204]   - Booting up control plane ...
	I0314 18:33:43.840988  549120 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:33:43.841069  549120 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:33:43.841924  549120 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:33:43.852807  549120 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:33:43.853860  549120 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:33:43.853921  549120 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 18:33:43.955790  549120 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 18:33:51.958024  549120 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.002148 seconds
	I0314 18:33:51.958144  549120 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 18:33:51.974239  549120 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 18:33:52.499877  549120 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 18:33:52.500074  549120 kubeadm.go:309] [mark-control-plane] Marking the node addons-511560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 18:33:53.017919  549120 kubeadm.go:309] [bootstrap-token] Using token: 2xsbtz.r51wj1oiadiunfd7
	I0314 18:33:53.019859  549120 out.go:204]   - Configuring RBAC rules ...
	I0314 18:33:53.019998  549120 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 18:33:53.025603  549120 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 18:33:53.036154  549120 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 18:33:53.040450  549120 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 18:33:53.044534  549120 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 18:33:53.048646  549120 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 18:33:53.062970  549120 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 18:33:53.269648  549120 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 18:33:53.431176  549120 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 18:33:53.432833  549120 kubeadm.go:309] 
	I0314 18:33:53.432930  549120 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 18:33:53.432950  549120 kubeadm.go:309] 
	I0314 18:33:53.433035  549120 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 18:33:53.433050  549120 kubeadm.go:309] 
	I0314 18:33:53.433076  549120 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 18:33:53.433459  549120 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 18:33:53.433517  549120 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 18:33:53.433522  549120 kubeadm.go:309] 
	I0314 18:33:53.433581  549120 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 18:33:53.433587  549120 kubeadm.go:309] 
	I0314 18:33:53.433639  549120 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 18:33:53.433645  549120 kubeadm.go:309] 
	I0314 18:33:53.433694  549120 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 18:33:53.433767  549120 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 18:33:53.433833  549120 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 18:33:53.433853  549120 kubeadm.go:309] 
	I0314 18:33:53.437009  549120 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 18:33:53.437098  549120 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 18:33:53.437103  549120 kubeadm.go:309] 
	I0314 18:33:53.437821  549120 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 2xsbtz.r51wj1oiadiunfd7 \
	I0314 18:33:53.437930  549120 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a4fe8fb4b69a78f77e63084830195d73baa70d21faafa8aaf573cb10334eb29d \
	I0314 18:33:53.438093  549120 kubeadm.go:309] 	--control-plane 
	I0314 18:33:53.438102  549120 kubeadm.go:309] 
	I0314 18:33:53.438184  549120 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 18:33:53.438189  549120 kubeadm.go:309] 
	I0314 18:33:53.438267  549120 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 2xsbtz.r51wj1oiadiunfd7 \
	I0314 18:33:53.438366  549120 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a4fe8fb4b69a78f77e63084830195d73baa70d21faafa8aaf573cb10334eb29d 
	I0314 18:33:53.446515  549120 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0314 18:33:53.446736  549120 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 18:33:53.446775  549120 cni.go:84] Creating CNI manager for ""
	I0314 18:33:53.446818  549120 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:33:53.450745  549120 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 18:33:53.452878  549120 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 18:33:53.465271  549120 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 18:33:53.499160  549120 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:33:53.499279  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:53.499352  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-511560 minikube.k8s.io/updated_at=2024_03_14T18_33_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=addons-511560 minikube.k8s.io/primary=true
	I0314 18:33:53.802941  549120 ops.go:34] apiserver oom_adj: -16
	I0314 18:33:53.803115  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:54.303562  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:54.803449  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:55.303733  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:55.803808  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:56.303865  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:56.803925  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:57.303344  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:57.803988  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:58.303747  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:58.803489  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:59.303329  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:33:59.803808  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:00.304208  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:00.803643  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:01.303195  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:01.803411  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:02.304231  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:02.803279  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:03.303844  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:03.803390  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:04.303864  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:04.803239  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:05.303303  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:05.803701  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:06.303220  549120 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:34:06.405975  549120 kubeadm.go:1106] duration metric: took 12.906738011s to wait for elevateKubeSystemPrivileges
	W0314 18:34:06.406019  549120 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:34:06.406026  549120 kubeadm.go:393] duration metric: took 29.591761208s to StartCluster
	I0314 18:34:06.406042  549120 settings.go:142] acquiring lock: {Name:mkfc2f1554604a8791fad9c92df19434d12a3d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:34:06.406656  549120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:34:06.407129  549120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/kubeconfig: {Name:mkede4700b9e8f4a9de6d389efb476a6ed252758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:34:06.408200  549120 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 18:34:06.411796  549120 out.go:177] * Verifying Kubernetes components...
	I0314 18:34:06.408319  549120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:34:06.408483  549120 config.go:182] Loaded profile config "addons-511560": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:34:06.408494  549120 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0314 18:34:06.414270  549120 addons.go:69] Setting yakd=true in profile "addons-511560"
	I0314 18:34:06.414283  549120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:34:06.414291  549120 addons.go:234] Setting addon yakd=true in "addons-511560"
	I0314 18:34:06.414322  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.414394  549120 addons.go:69] Setting ingress-dns=true in profile "addons-511560"
	I0314 18:34:06.414416  549120 addons.go:234] Setting addon ingress-dns=true in "addons-511560"
	I0314 18:34:06.414458  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.414813  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.414902  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.416563  549120 addons.go:69] Setting cloud-spanner=true in profile "addons-511560"
	I0314 18:34:06.416656  549120 addons.go:234] Setting addon cloud-spanner=true in "addons-511560"
	I0314 18:34:06.416720  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.416724  549120 addons.go:69] Setting inspektor-gadget=true in profile "addons-511560"
	I0314 18:34:06.416770  549120 addons.go:234] Setting addon inspektor-gadget=true in "addons-511560"
	I0314 18:34:06.416797  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.417138  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.417322  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.423659  549120 addons.go:69] Setting metrics-server=true in profile "addons-511560"
	I0314 18:34:06.424302  549120 addons.go:234] Setting addon metrics-server=true in "addons-511560"
	I0314 18:34:06.424713  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.424011  549120 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-511560"
	I0314 18:34:06.429878  549120 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-511560"
	I0314 18:34:06.429947  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.424031  549120 addons.go:69] Setting registry=true in profile "addons-511560"
	I0314 18:34:06.444782  549120 addons.go:234] Setting addon registry=true in "addons-511560"
	I0314 18:34:06.445215  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.424042  549120 addons.go:69] Setting storage-provisioner=true in profile "addons-511560"
	I0314 18:34:06.446309  549120 addons.go:234] Setting addon storage-provisioner=true in "addons-511560"
	I0314 18:34:06.446362  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.446815  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.447207  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.457045  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.424089  549120 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-511560"
	I0314 18:34:06.459158  549120 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-511560"
	I0314 18:34:06.424099  549120 addons.go:69] Setting volumesnapshots=true in profile "addons-511560"
	I0314 18:34:06.459481  549120 addons.go:234] Setting addon volumesnapshots=true in "addons-511560"
	I0314 18:34:06.459527  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.459914  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.424211  549120 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-511560"
	I0314 18:34:06.479641  549120 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-511560"
	I0314 18:34:06.479684  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.480193  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.517446  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.424216  549120 addons.go:69] Setting default-storageclass=true in profile "addons-511560"
	I0314 18:34:06.517739  549120 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-511560"
	I0314 18:34:06.518016  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.424221  549120 addons.go:69] Setting gcp-auth=true in profile "addons-511560"
	I0314 18:34:06.520175  549120 mustload.go:65] Loading cluster: addons-511560
	I0314 18:34:06.520352  549120 config.go:182] Loaded profile config "addons-511560": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:34:06.520598  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.424225  549120 addons.go:69] Setting ingress=true in profile "addons-511560"
	I0314 18:34:06.531884  549120 addons.go:234] Setting addon ingress=true in "addons-511560"
	I0314 18:34:06.531939  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.532367  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.444602  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.625649  549120 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0314 18:34:06.644456  549120 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0314 18:34:06.644509  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0314 18:34:06.644580  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.679073  549120 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0314 18:34:06.698192  549120 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-511560"
	I0314 18:34:06.710895  549120 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 18:34:06.710944  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0314 18:34:06.711016  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.711234  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0314 18:34:06.715197  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0314 18:34:06.711470  549120 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0314 18:34:06.711476  549120 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0314 18:34:06.711488  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0314 18:34:06.711491  549120 out.go:177]   - Using image docker.io/registry:2.8.3
	I0314 18:34:06.711525  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.725029  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0314 18:34:06.722081  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.747596  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0314 18:34:06.753562  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0314 18:34:06.751800  549120 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0314 18:34:06.751839  549120 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0314 18:34:06.751866  549120 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0314 18:34:06.751912  549120 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:34:06.753131  549120 addons.go:234] Setting addon default-storageclass=true in "addons-511560"
	I0314 18:34:06.762341  549120 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0314 18:34:06.760132  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0314 18:34:06.760141  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0314 18:34:06.760147  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0314 18:34:06.760156  549120 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0314 18:34:06.760211  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.765135  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.765188  549120 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 18:34:06.765194  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0314 18:34:06.765222  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.765252  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.793116  549120 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0314 18:34:06.793139  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0314 18:34:06.793204  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.772539  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 18:34:06.814732  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.827404  549120 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0314 18:34:06.773603  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:06.772651  549120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:34:06.829041  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:06.831230  549120 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:34:06.833413  549120 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:34:06.835652  549120 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0314 18:34:06.857253  549120 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 18:34:06.857338  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0314 18:34:06.857557  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.861511  549120 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 18:34:06.861544  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0314 18:34:06.861633  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.869512  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0314 18:34:06.856341  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:34:06.888481  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:06.938903  549120 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0314 18:34:06.941581  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0314 18:34:06.941608  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0314 18:34:06.941715  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:07.005805  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.028121  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.033356  549120 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0314 18:34:07.039672  549120 out.go:177]   - Using image docker.io/busybox:stable
	I0314 18:34:07.033329  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.033627  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.036368  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.042457  549120 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 18:34:07.042476  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0314 18:34:07.042541  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:07.055764  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.067547  549120 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:34:07.069744  549120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:34:07.091251  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.103127  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.111534  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.133700  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.162185  549120 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:34:07.162207  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:34:07.162271  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:07.163095  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	W0314 18:34:07.169044  549120 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0314 18:34:07.169084  549120 retry.go:31] will retry after 223.719844ms: ssh: handshake failed: EOF
	I0314 18:34:07.174215  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:07.222593  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	W0314 18:34:07.240168  549120 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0314 18:34:07.240202  549120 retry.go:31] will retry after 234.970983ms: ssh: handshake failed: EOF
	W0314 18:34:07.485909  549120 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0314 18:34:07.485992  549120 retry.go:31] will retry after 546.261784ms: ssh: handshake failed: EOF
	I0314 18:34:07.702123  549120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 18:34:07.702149  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0314 18:34:07.887472  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0314 18:34:07.891328  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 18:34:07.933356  549120 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0314 18:34:07.933383  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0314 18:34:07.990554  549120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 18:34:07.990639  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 18:34:08.074114  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 18:34:08.111361  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 18:34:08.122578  549120 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0314 18:34:08.122644  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0314 18:34:08.127380  549120 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0314 18:34:08.127444  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0314 18:34:08.131811  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:34:08.135418  549120 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0314 18:34:08.135446  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0314 18:34:08.140414  549120 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0314 18:34:08.140441  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0314 18:34:08.155383  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0314 18:34:08.155410  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0314 18:34:08.227213  549120 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 18:34:08.227240  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 18:34:08.246815  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 18:34:08.274955  549120 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0314 18:34:08.275014  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0314 18:34:08.314934  549120 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0314 18:34:08.314960  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0314 18:34:08.347786  549120 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0314 18:34:08.347813  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0314 18:34:08.438054  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0314 18:34:08.438085  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0314 18:34:08.449767  549120 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0314 18:34:08.449793  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0314 18:34:08.540429  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 18:34:08.571975  549120 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0314 18:34:08.572005  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0314 18:34:08.585203  549120 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0314 18:34:08.585227  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0314 18:34:08.769125  549120 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0314 18:34:08.769153  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0314 18:34:08.784401  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0314 18:34:08.784428  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0314 18:34:08.787437  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0314 18:34:08.835597  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:34:08.920666  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0314 18:34:08.995411  549120 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0314 18:34:08.995449  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0314 18:34:09.118540  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0314 18:34:09.118569  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0314 18:34:09.333612  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0314 18:34:09.333638  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0314 18:34:09.419217  549120 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0314 18:34:09.419244  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0314 18:34:09.552983  549120 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:34:09.553008  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0314 18:34:09.770048  549120 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0314 18:34:09.770074  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0314 18:34:09.772202  549120 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0314 18:34:09.772226  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0314 18:34:09.904697  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:34:09.989093  549120 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 18:34:09.989122  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0314 18:34:10.016204  549120 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0314 18:34:10.016237  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0314 18:34:10.332975  549120 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0314 18:34:10.333002  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0314 18:34:10.350041  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 18:34:10.500977  549120 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0314 18:34:10.501005  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0314 18:34:10.544112  549120 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.474336251s)
	I0314 18:34:10.544967  549120 node_ready.go:35] waiting up to 6m0s for node "addons-511560" to be "Ready" ...
	I0314 18:34:10.545160  549120 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.47758739s)
	I0314 18:34:10.545185  549120 start.go:948] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0314 18:34:10.550577  549120 node_ready.go:49] node "addons-511560" has status "Ready":"True"
	I0314 18:34:10.550604  549120 node_ready.go:38] duration metric: took 5.614522ms for node "addons-511560" to be "Ready" ...
	I0314 18:34:10.550614  549120 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:34:10.569915  549120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:10.812479  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.92496884s)
	I0314 18:34:10.871098  549120 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0314 18:34:10.871122  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0314 18:34:11.022381  549120 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 18:34:11.022412  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0314 18:34:11.053888  549120 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-511560" context rescaled to 1 replicas
	I0314 18:34:11.277387  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 18:34:12.583476  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:13.443300  549120 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0314 18:34:13.443389  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:13.477310  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:14.519557  549120 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0314 18:34:14.648950  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:14.732196  549120 addons.go:234] Setting addon gcp-auth=true in "addons-511560"
	I0314 18:34:14.732304  549120 host.go:66] Checking if "addons-511560" exists ...
	I0314 18:34:14.732843  549120 cli_runner.go:164] Run: docker container inspect addons-511560 --format={{.State.Status}}
	I0314 18:34:14.763387  549120 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0314 18:34:14.763440  549120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-511560
	I0314 18:34:14.786436  549120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33509 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/addons-511560/id_rsa Username:docker}
	I0314 18:34:16.597976  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.706609562s)
	I0314 18:34:16.598070  549120 addons.go:470] Verifying addon ingress=true in "addons-511560"
	I0314 18:34:16.600508  549120 out.go:177] * Verifying ingress addon...
	I0314 18:34:16.598345  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.524207125s)
	I0314 18:34:16.598385  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.486960977s)
	I0314 18:34:16.598424  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.466589966s)
	I0314 18:34:16.598485  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.351649709s)
	I0314 18:34:16.598571  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.058100387s)
	I0314 18:34:16.598613  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.811150814s)
	I0314 18:34:16.598634  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.76300622s)
	I0314 18:34:16.598683  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.677988841s)
	I0314 18:34:16.598768  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.694043221s)
	I0314 18:34:16.598823  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.248755275s)
	I0314 18:34:16.601039  549120 addons.go:470] Verifying addon metrics-server=true in "addons-511560"
	I0314 18:34:16.601243  549120 addons.go:470] Verifying addon registry=true in "addons-511560"
	W0314 18:34:16.601389  549120 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 18:34:16.605527  549120 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0314 18:34:16.607242  549120 out.go:177] * Verifying registry addon...
	I0314 18:34:16.610992  549120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0314 18:34:16.607339  549120 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-511560 service yakd-dashboard -n yakd-dashboard
	
	I0314 18:34:16.607356  549120 retry.go:31] will retry after 249.354688ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 18:34:16.654124  549120 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0314 18:34:16.654200  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:16.662797  549120 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0314 18:34:16.662819  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0314 18:34:16.672495  549120 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0314 18:34:16.863534  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:34:17.078266  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:17.111925  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:17.116303  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:17.612218  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:17.616091  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:18.115650  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:18.121298  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:18.581732  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.304293454s)
	I0314 18:34:18.581777  549120 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-511560"
	I0314 18:34:18.585885  549120 out.go:177] * Verifying csi-hostpath-driver addon...
	I0314 18:34:18.582003  549120 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.818593326s)
	I0314 18:34:18.592866  549120 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:34:18.590912  549120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0314 18:34:18.597457  549120 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0314 18:34:18.599799  549120 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0314 18:34:18.599829  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0314 18:34:18.603307  549120 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0314 18:34:18.603377  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:18.611766  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:18.616115  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:18.677237  549120 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0314 18:34:18.677261  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0314 18:34:18.716862  549120 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 18:34:18.716894  549120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0314 18:34:18.753009  549120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 18:34:19.079771  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:19.108320  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:19.132802  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:19.134095  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:19.323305  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.459726509s)
	I0314 18:34:19.602568  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:19.614781  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:19.617939  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:20.116573  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:20.158095  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:20.158998  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:20.191840  549120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.438765421s)
	I0314 18:34:20.196744  549120 addons.go:470] Verifying addon gcp-auth=true in "addons-511560"
	I0314 18:34:20.200980  549120 out.go:177] * Verifying gcp-auth addon...
	I0314 18:34:20.204386  549120 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0314 18:34:20.239640  549120 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0314 18:34:20.239717  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:20.602611  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:20.615576  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:20.620562  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:20.708713  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:21.102012  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:21.115313  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:21.118740  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:21.208489  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:21.576993  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:21.600331  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:21.611543  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:21.615681  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:21.708514  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:22.102424  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:22.112200  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:22.115703  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:22.209044  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:22.601192  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:22.612310  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:22.618180  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:22.708938  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:23.104305  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:23.112882  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:23.115864  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:23.208754  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:23.577217  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:23.601303  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:23.611744  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:23.616351  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:23.708309  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:24.102863  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:24.124685  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:24.125935  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:24.208810  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:24.601227  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:24.614752  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:24.618803  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:24.708966  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:25.102982  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:25.116037  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:25.120613  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:25.208272  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:25.577715  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:25.601922  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:25.616205  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:25.617564  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:25.708305  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:26.101544  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:26.112309  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:26.115275  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:26.208318  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:26.600230  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:26.611310  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:26.615231  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:26.708142  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:27.100645  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:27.112164  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:27.115403  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:27.208214  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:27.601434  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:27.611963  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:27.616796  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:27.709027  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:28.078114  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:28.102286  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:28.113977  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:28.117022  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:28.208713  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:28.600935  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:28.612281  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:28.616403  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:28.708034  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:29.101164  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:29.121180  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:29.122446  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:29.208212  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:29.601341  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:29.611907  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:29.616609  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:29.708869  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:30.102134  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:30.113083  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:30.118725  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:30.209104  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:30.576894  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:30.602081  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:30.611301  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:30.615361  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:30.707957  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:31.108267  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:31.114291  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:31.119895  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:31.209120  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:31.602121  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:31.614521  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:31.619009  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:31.708371  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:32.100951  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:32.112322  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:32.115828  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:32.209126  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:32.600948  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:32.611311  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:32.615580  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:32.708398  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:33.077273  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:33.102904  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:33.116567  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:33.120598  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:33.208324  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:33.600527  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:33.616726  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:33.617590  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:33.710537  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:34.100969  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:34.112594  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:34.115911  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:34.208501  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:34.600962  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:34.614711  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:34.618305  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:34.709038  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:35.093940  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:35.127071  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:35.128391  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:35.139374  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:35.209056  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:35.600672  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:35.615106  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:35.623527  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:35.708376  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:36.102568  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:36.115711  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:36.120296  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:36.208157  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:36.601332  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:36.611607  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:36.616567  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:36.708384  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:37.101688  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:37.112193  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:37.117212  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:34:37.208608  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:37.577053  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:37.601824  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:37.612676  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:37.617770  549120 kapi.go:107] duration metric: took 21.006772567s to wait for kubernetes.io/minikube-addons=registry ...
	I0314 18:34:37.710802  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:38.101530  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:38.112279  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:38.208277  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:38.602228  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:38.612592  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:38.708557  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:39.102405  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:39.114763  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:39.208987  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:39.577154  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:39.601465  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:39.612304  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:39.711173  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:40.101435  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:40.112671  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:40.209310  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:40.601785  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:40.612712  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:40.708735  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:41.101540  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:41.111907  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:41.208639  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:41.600263  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:41.611452  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:41.708090  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:42.078850  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:42.104106  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:42.114982  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:42.210514  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:42.603203  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:42.616955  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:42.721108  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:43.101742  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:43.112334  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:43.208815  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:43.601340  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:43.611534  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:43.708136  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:44.101336  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:44.111523  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:44.208304  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:44.577715  549120 pod_ready.go:102] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"False"
	I0314 18:34:44.602800  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:44.614726  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:44.708544  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:45.110641  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:45.115775  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:45.209805  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:45.602931  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:45.612955  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:45.708707  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:46.101563  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:46.112223  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:46.209194  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:46.577474  549120 pod_ready.go:92] pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:46.577546  549120 pod_ready.go:81] duration metric: took 36.007596227s for pod "coredns-5dd5756b68-5t9fq" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.577572  549120 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vkbd7" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.580007  549120 pod_ready.go:97] error getting pod "coredns-5dd5756b68-vkbd7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vkbd7" not found
	I0314 18:34:46.580071  549120 pod_ready.go:81] duration metric: took 2.478516ms for pod "coredns-5dd5756b68-vkbd7" in "kube-system" namespace to be "Ready" ...
	E0314 18:34:46.580097  549120 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-vkbd7" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-vkbd7" not found
	I0314 18:34:46.580118  549120 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.586046  549120 pod_ready.go:92] pod "etcd-addons-511560" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:46.586110  549120 pod_ready.go:81] duration metric: took 5.957273ms for pod "etcd-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.586136  549120 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.592175  549120 pod_ready.go:92] pod "kube-apiserver-addons-511560" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:46.592245  549120 pod_ready.go:81] duration metric: took 6.088882ms for pod "kube-apiserver-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.592272  549120 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.602731  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:46.604264  549120 pod_ready.go:92] pod "kube-controller-manager-addons-511560" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:46.604283  549120 pod_ready.go:81] duration metric: took 11.991066ms for pod "kube-controller-manager-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.604294  549120 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f8mpx" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.615869  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:46.708164  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:46.774587  549120 pod_ready.go:92] pod "kube-proxy-f8mpx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:46.774613  549120 pod_ready.go:81] duration metric: took 170.31165ms for pod "kube-proxy-f8mpx" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:46.774625  549120 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:47.100695  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:47.112185  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:47.174407  549120 pod_ready.go:92] pod "kube-scheduler-addons-511560" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:47.174479  549120 pod_ready.go:81] duration metric: took 399.845612ms for pod "kube-scheduler-addons-511560" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:47.174507  549120 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-s5nv9" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:47.208584  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:47.574821  549120 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-s5nv9" in "kube-system" namespace has status "Ready":"True"
	I0314 18:34:47.574843  549120 pod_ready.go:81] duration metric: took 400.316936ms for pod "nvidia-device-plugin-daemonset-s5nv9" in "kube-system" namespace to be "Ready" ...
	I0314 18:34:47.574854  549120 pod_ready.go:38] duration metric: took 37.024228504s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:34:47.574892  549120 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:34:47.574976  549120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:34:47.593903  549120 api_server.go:72] duration metric: took 41.18566449s to wait for apiserver process to appear ...
	I0314 18:34:47.593928  549120 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:34:47.593949  549120 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0314 18:34:47.602019  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:47.604475  549120 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0314 18:34:47.605907  549120 api_server.go:141] control plane version: v1.28.4
	I0314 18:34:47.605933  549120 api_server.go:131] duration metric: took 11.997548ms to wait for apiserver health ...
	I0314 18:34:47.605948  549120 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:34:47.611346  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:47.707974  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:47.781111  549120 system_pods.go:59] 17 kube-system pods found
	I0314 18:34:47.781144  549120 system_pods.go:61] "coredns-5dd5756b68-5t9fq" [5d8587e3-8eff-4c77-afe8-31153a73ffd9] Running
	I0314 18:34:47.781153  549120 system_pods.go:61] "csi-hostpath-attacher-0" [2bf3c922-34ad-4029-84f7-925f2204234b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0314 18:34:47.781161  549120 system_pods.go:61] "csi-hostpath-resizer-0" [4fe56f09-9408-4382-a30e-d30ca2fd4c37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0314 18:34:47.781169  549120 system_pods.go:61] "csi-hostpathplugin-l54bh" [f8e3d627-56b6-4fb3-8016-b12a4edb1082] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 18:34:47.781175  549120 system_pods.go:61] "etcd-addons-511560" [72669920-5a66-489a-a05f-f7002eb636c8] Running
	I0314 18:34:47.781181  549120 system_pods.go:61] "kube-apiserver-addons-511560" [c19d78f9-2d62-43e4-8e29-37638c9b5985] Running
	I0314 18:34:47.781185  549120 system_pods.go:61] "kube-controller-manager-addons-511560" [b88ad487-fc8a-4c04-a4db-0b94c82eae60] Running
	I0314 18:34:47.781204  549120 system_pods.go:61] "kube-ingress-dns-minikube" [f607b390-31d1-4707-91b2-335d6e7c3019] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0314 18:34:47.781209  549120 system_pods.go:61] "kube-proxy-f8mpx" [045f1e90-7186-4b52-84d1-15764ab92c9b] Running
	I0314 18:34:47.781213  549120 system_pods.go:61] "kube-scheduler-addons-511560" [37fc994f-f54c-4510-adf6-64db61564b5b] Running
	I0314 18:34:47.781218  549120 system_pods.go:61] "metrics-server-69cf46c98-v4k6z" [edf6e7c7-a19a-443d-abe6-43d7c38ef033] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 18:34:47.781223  549120 system_pods.go:61] "nvidia-device-plugin-daemonset-s5nv9" [84a64e11-098c-4556-b5de-76b2a3590dd1] Running
	I0314 18:34:47.781230  549120 system_pods.go:61] "registry-5r7v7" [6a23b094-28b6-478f-9c8d-f6ba7e0d8f45] Running
	I0314 18:34:47.781234  549120 system_pods.go:61] "registry-proxy-gxw9p" [07a23dc8-cdad-49d7-b3d7-652ef0b03f9c] Running
	I0314 18:34:47.781240  549120 system_pods.go:61] "snapshot-controller-58dbcc7b99-j67xq" [e0270048-1c08-4120-a5b9-7807eb46d1a8] Running
	I0314 18:34:47.781244  549120 system_pods.go:61] "snapshot-controller-58dbcc7b99-x2g2s" [b1af1274-a95b-4fd0-b3a2-2a51506e1096] Running
	I0314 18:34:47.781248  549120 system_pods.go:61] "storage-provisioner" [548a0000-b8a9-4fd0-bf26-2a16288218cc] Running
	I0314 18:34:47.781257  549120 system_pods.go:74] duration metric: took 175.271899ms to wait for pod list to return data ...
	I0314 18:34:47.781266  549120 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:34:47.974749  549120 default_sa.go:45] found service account: "default"
	I0314 18:34:47.974776  549120 default_sa.go:55] duration metric: took 193.502656ms for default service account to be created ...
	I0314 18:34:47.974786  549120 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:34:48.101649  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:48.111816  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:48.182154  549120 system_pods.go:86] 17 kube-system pods found
	I0314 18:34:48.182235  549120 system_pods.go:89] "coredns-5dd5756b68-5t9fq" [5d8587e3-8eff-4c77-afe8-31153a73ffd9] Running
	I0314 18:34:48.182260  549120 system_pods.go:89] "csi-hostpath-attacher-0" [2bf3c922-34ad-4029-84f7-925f2204234b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0314 18:34:48.182286  549120 system_pods.go:89] "csi-hostpath-resizer-0" [4fe56f09-9408-4382-a30e-d30ca2fd4c37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0314 18:34:48.182329  549120 system_pods.go:89] "csi-hostpathplugin-l54bh" [f8e3d627-56b6-4fb3-8016-b12a4edb1082] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 18:34:48.182351  549120 system_pods.go:89] "etcd-addons-511560" [72669920-5a66-489a-a05f-f7002eb636c8] Running
	I0314 18:34:48.182388  549120 system_pods.go:89] "kube-apiserver-addons-511560" [c19d78f9-2d62-43e4-8e29-37638c9b5985] Running
	I0314 18:34:48.182414  549120 system_pods.go:89] "kube-controller-manager-addons-511560" [b88ad487-fc8a-4c04-a4db-0b94c82eae60] Running
	I0314 18:34:48.182443  549120 system_pods.go:89] "kube-ingress-dns-minikube" [f607b390-31d1-4707-91b2-335d6e7c3019] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0314 18:34:48.182480  549120 system_pods.go:89] "kube-proxy-f8mpx" [045f1e90-7186-4b52-84d1-15764ab92c9b] Running
	I0314 18:34:48.182503  549120 system_pods.go:89] "kube-scheduler-addons-511560" [37fc994f-f54c-4510-adf6-64db61564b5b] Running
	I0314 18:34:48.182528  549120 system_pods.go:89] "metrics-server-69cf46c98-v4k6z" [edf6e7c7-a19a-443d-abe6-43d7c38ef033] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 18:34:48.182562  549120 system_pods.go:89] "nvidia-device-plugin-daemonset-s5nv9" [84a64e11-098c-4556-b5de-76b2a3590dd1] Running
	I0314 18:34:48.182587  549120 system_pods.go:89] "registry-5r7v7" [6a23b094-28b6-478f-9c8d-f6ba7e0d8f45] Running
	I0314 18:34:48.182610  549120 system_pods.go:89] "registry-proxy-gxw9p" [07a23dc8-cdad-49d7-b3d7-652ef0b03f9c] Running
	I0314 18:34:48.182648  549120 system_pods.go:89] "snapshot-controller-58dbcc7b99-j67xq" [e0270048-1c08-4120-a5b9-7807eb46d1a8] Running
	I0314 18:34:48.182674  549120 system_pods.go:89] "snapshot-controller-58dbcc7b99-x2g2s" [b1af1274-a95b-4fd0-b3a2-2a51506e1096] Running
	I0314 18:34:48.182698  549120 system_pods.go:89] "storage-provisioner" [548a0000-b8a9-4fd0-bf26-2a16288218cc] Running
	I0314 18:34:48.182741  549120 system_pods.go:126] duration metric: took 207.94416ms to wait for k8s-apps to be running ...
	I0314 18:34:48.182768  549120 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:34:48.182865  549120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:34:48.197098  549120 system_svc.go:56] duration metric: took 14.32212ms WaitForService to wait for kubelet
	I0314 18:34:48.197172  549120 kubeadm.go:576] duration metric: took 41.788936849s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:34:48.197226  549120 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:34:48.208973  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:48.375297  549120 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0314 18:34:48.375334  549120 node_conditions.go:123] node cpu capacity is 2
	I0314 18:34:48.375346  549120 node_conditions.go:105] duration metric: took 178.101601ms to run NodePressure ...
	I0314 18:34:48.375359  549120 start.go:240] waiting for startup goroutines ...
	I0314 18:34:48.601000  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:48.613273  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:48.707993  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:49.100520  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:49.112092  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:49.208963  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:49.600500  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:49.613210  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:49.719038  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:50.103456  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:50.114936  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:50.209247  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:50.601751  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:50.612730  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:50.709612  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:51.101309  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:51.112741  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:51.209344  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:51.602847  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:51.613750  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:51.709859  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:52.115580  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:52.123738  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:52.211071  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:52.614925  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:52.617976  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:52.708989  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:53.100633  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:53.111746  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:53.208481  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:53.601649  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:53.612144  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:53.708809  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:54.100944  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:54.111184  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:54.209281  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:54.604846  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:54.612934  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:54.708887  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:55.102261  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:55.112084  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:55.209012  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:55.606182  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:55.613293  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:55.707939  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:56.102271  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:56.114024  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:56.209518  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:56.602011  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:56.612650  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:56.709233  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:57.102541  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:57.112382  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:57.208376  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:57.601407  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:57.611833  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:57.710298  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:58.101722  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:58.114018  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:58.208660  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:58.601992  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:58.611747  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:58.708311  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:59.101471  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:59.111673  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:59.209902  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:34:59.601558  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:34:59.611881  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:34:59.713761  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:00.126532  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:00.139513  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:00.216397  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:00.602236  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:00.612134  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:00.709743  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:01.102832  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:01.111496  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:01.218374  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:01.601310  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:01.611801  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:01.708229  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:02.101205  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:02.112766  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:02.208369  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:02.603193  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:02.611714  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:02.708636  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:03.101278  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:03.112065  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:03.209409  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:03.600970  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:03.611223  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:03.708488  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:04.101887  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:04.118401  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:04.208545  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:04.601299  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:04.611843  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:04.708434  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:05.102782  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:05.112080  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:05.209225  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:05.600611  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:35:05.611878  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:05.708536  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:06.101338  549120 kapi.go:107] duration metric: took 47.510420863s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0314 18:35:06.112236  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:06.208421  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:06.611185  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:06.708115  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:07.111958  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:07.208862  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:07.612137  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:07.708891  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:08.112008  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:08.208897  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:08.612265  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:08.708743  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:09.111959  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:09.208450  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:09.611873  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:09.708274  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:10.112278  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:10.208769  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:10.611625  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:10.708508  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:11.111566  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:11.208451  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:11.611906  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:11.708752  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:12.112106  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:12.209065  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:12.611886  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:12.708629  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:13.111761  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:13.208526  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:13.611471  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:13.708310  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:14.111322  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:14.207861  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:14.611610  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:14.709156  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:15.112573  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:15.212427  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:15.612374  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:15.707920  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:16.111673  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:16.208704  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:16.612292  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:16.708233  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:17.111900  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:17.208536  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:17.612396  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:17.708164  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:18.112039  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:18.208750  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:18.611907  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:18.708545  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:19.112257  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:19.208712  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:19.611382  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:19.707981  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:20.111907  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:20.208967  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:20.612124  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:20.716970  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:21.111887  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:21.208775  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:21.613162  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:21.708272  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:22.111999  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:22.208629  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:22.611234  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:22.708941  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:23.112277  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:23.209669  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:23.612035  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:23.709075  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:24.112345  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:24.208024  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:24.611866  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:24.708277  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:25.112604  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:25.208582  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:25.612287  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:25.708161  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:26.111900  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:26.209300  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:26.611784  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:26.708562  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:27.111446  549120 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:35:27.208095  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:27.612754  549120 kapi.go:107] duration metric: took 1m11.0072235s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0314 18:35:27.708877  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:28.208924  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:28.709068  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:29.208710  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:29.711944  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:30.209108  549120 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:35:30.708243  549120 kapi.go:107] duration metric: took 1m10.503854887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0314 18:35:30.710303  549120 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-511560 cluster.
	I0314 18:35:30.713122  549120 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0314 18:35:30.715523  549120 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0314 18:35:30.718034  549120 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0314 18:35:30.720204  549120 addons.go:505] duration metric: took 1m24.311703351s for enable addons: enabled=[cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0314 18:35:30.720258  549120 start.go:245] waiting for cluster config update ...
	I0314 18:35:30.720281  549120 start.go:254] writing updated cluster config ...
	I0314 18:35:30.720600  549120 ssh_runner.go:195] Run: rm -f paused
	I0314 18:35:31.064316  549120 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:35:31.066276  549120 out.go:177] * Done! kubectl is now configured to use "addons-511560" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 18:36:09 addons-511560 dockerd[1129]: time="2024-03-14T18:36:09.899652367Z" level=info msg="ignoring event" container=b03b5576ff06cb23a21af1afa4be17524cabcf86c9703c6711570ac248aa1c34 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:09 addons-511560 dockerd[1129]: time="2024-03-14T18:36:09.995448066Z" level=info msg="ignoring event" container=6ea02aa6dd5ffeaa85e4cffa96f1a81011d02435e53bc1490fbcd18a8c0351e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:17 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3e789bce829239469bfcd1a75bf6347e1c5ca6083effb23860f0401f00363267/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Mar 14 18:36:19 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:19Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Mar 14 18:36:20 addons-511560 dockerd[1129]: time="2024-03-14T18:36:20.160653350Z" level=info msg="ignoring event" container=13dfd70bdc2e9a07fcc1fd60f1fa7a5bbfac7481b9e94c1dab38efbcf4512e99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:21 addons-511560 dockerd[1129]: time="2024-03-14T18:36:21.272089432Z" level=info msg="ignoring event" container=c1f53da80ab5c231061a4821419935283c562ba370d9b98038f1ffaaba44c729 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:21 addons-511560 dockerd[1129]: time="2024-03-14T18:36:21.341905463Z" level=info msg="ignoring event" container=7f416f11bf74054b94129b08bc0079c6e1af93ca69fae2c0e2a6ae4d75dd1f45 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:21 addons-511560 dockerd[1129]: time="2024-03-14T18:36:21.479690670Z" level=info msg="ignoring event" container=2cb998aaa0cbee809e74fa1bb72bde9d5ef3347461da043e27534ea431c24b8b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:22 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d5c7f4e83c8f645ac6f646a24b3c30ee7ac3df4402ce37ebe2d6e6cec2082e26/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Mar 14 18:36:22 addons-511560 dockerd[1129]: time="2024-03-14T18:36:22.392269929Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=1a8a8a8cfea34510 traceID=c98d5db386fd9dd207729f1e1aae4104
	Mar 14 18:36:22 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:22Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Mar 14 18:36:23 addons-511560 dockerd[1129]: time="2024-03-14T18:36:23.093405332Z" level=info msg="ignoring event" container=897415a078f2efa1abe4d2c92c690245894d75fe77a8635e4f39eb996a737959 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:24 addons-511560 dockerd[1129]: time="2024-03-14T18:36:24.281001663Z" level=info msg="ignoring event" container=d5c7f4e83c8f645ac6f646a24b3c30ee7ac3df4402ce37ebe2d6e6cec2082e26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:26 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a890262e1ea9698e736e8cf8cd1b6e5d8d995d393d81187a6e24a3d729e8a5f9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Mar 14 18:36:27 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:27Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Mar 14 18:36:27 addons-511560 dockerd[1129]: time="2024-03-14T18:36:27.271148766Z" level=info msg="ignoring event" container=cbdea600f16b4b3ac4235d3cc9a1db2bf5314d51f1cb75740ee6c4bf95c021ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:29 addons-511560 dockerd[1129]: time="2024-03-14T18:36:29.418700355Z" level=info msg="ignoring event" container=a890262e1ea9698e736e8cf8cd1b6e5d8d995d393d81187a6e24a3d729e8a5f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:30 addons-511560 cri-dockerd[1340]: time="2024-03-14T18:36:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4514c4cf93c9732d6c27770d7c3d9be98c3564af6988ee23c8ca4af89dd71a19/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Mar 14 18:36:31 addons-511560 dockerd[1129]: time="2024-03-14T18:36:31.215934264Z" level=info msg="ignoring event" container=70138e24f87a4a21cff50c4b336a13f81f41f2497cc8a8bc41d719812e768b28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:32 addons-511560 dockerd[1129]: time="2024-03-14T18:36:32.495388287Z" level=info msg="ignoring event" container=4514c4cf93c9732d6c27770d7c3d9be98c3564af6988ee23c8ca4af89dd71a19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:33 addons-511560 dockerd[1129]: time="2024-03-14T18:36:33.362286660Z" level=info msg="ignoring event" container=74e6dccd6b0e2491d0de458bb6e6a15d896721041ce1a823f8f08b7def3af2bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:34 addons-511560 dockerd[1129]: time="2024-03-14T18:36:34.633728284Z" level=info msg="ignoring event" container=35a295142b4be2d83a4104f9eaff1783d2a162567f9b2a9c050b2179016b83e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:37 addons-511560 dockerd[1129]: time="2024-03-14T18:36:37.504795870Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=62cc25f85af16fc40d9a5137080540181d298e54961d3acccf42185c5b95ea94
	Mar 14 18:36:37 addons-511560 dockerd[1129]: time="2024-03-14T18:36:37.564056716Z" level=info msg="ignoring event" container=62cc25f85af16fc40d9a5137080540181d298e54961d3acccf42185c5b95ea94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Mar 14 18:36:37 addons-511560 dockerd[1129]: time="2024-03-14T18:36:37.675935264Z" level=info msg="ignoring event" container=55c358721a3b403c2eda0028eaf4dfb4c384482180efbeb48efe444ba2dcb127 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	35a295142b4be       dd1b12fcb6097                                                                                                                8 seconds ago        Exited              hello-world-app           2                   3e789bce82923       hello-world-app-5d77478584-pqwnc
	70138e24f87a4       fc9db2894f4e4                                                                                                                11 seconds ago       Exited              helper-pod                0                   4514c4cf93c97       helper-pod-delete-pvc-e2c10d25-b178-46d7-b7e9-3f699f3ef4aa
	cbdea600f16b4       busybox@sha256:650fd573e056b679a5110a70aabeb01e26b76e545ec4b9c70a9523f2dfaf18c6                                              15 seconds ago       Exited              busybox                   0                   a890262e1ea96       test-local-path
	897415a078f2e       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              20 seconds ago       Exited              helper-pod                0                   d5c7f4e83c8f6       helper-pod-create-pvc-e2c10d25-b178-46d7-b7e9-3f699f3ef4aa
	9a4088a306d82       nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                                                33 seconds ago       Running             nginx                     0                   2ba0cd6ec22c4       nginx
	ab2c52db8a336       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 About a minute ago   Running             gcp-auth                  0                   e47527292ef74       gcp-auth-7d69788767-nthb5
	f407e20126c40       1a024e390dd05                                                                                                                About a minute ago   Exited              patch                     1                   9278535d8e80f       ingress-nginx-admission-patch-pp6s7
	9aa5cad264290       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:44d1d0e9f19c63f58b380c5fddaca7cf22c7cee564adeff365225a5df5ef3334   About a minute ago   Exited              create                    0                   18d711d16bb6e       ingress-nginx-admission-create-b67f9
	44eace5686c89       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        About a minute ago   Running             yakd                      0                   6de1332059088       yakd-dashboard-9947fc6bf-sv57l
	8729e058f0a13       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner    0                   3b335eb16c0bb       local-path-provisioner-78b46b4d5c-99tzv
	ee79db098fc5a       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15               2 minutes ago        Running             cloud-spanner-emulator    0                   92dab27209f1e       cloud-spanner-emulator-6548d5df46-8pjls
	06ddf561c9ba3       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner       0                   cc6b94d46a8dc       storage-provisioner
	f4759b1dc30f5       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                   0                   59960bfc97b17       coredns-5dd5756b68-5t9fq
	2dfb2ed00122f       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                0                   5bea91bd0b3bd       kube-proxy-f8mpx
	e2a896ca85afd       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager   0                   1c8a25ce35d08       kube-controller-manager-addons-511560
	62629eb8f02de       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler            0                   3bffa6e8679d8       kube-scheduler-addons-511560
	9db6e59b9fcaa       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                      0                   f1903ec55aaec       etcd-addons-511560
	f59dc13a212f9       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver            0                   d8f0110bbf17c       kube-apiserver-addons-511560
	
	
	==> coredns [f4759b1dc30f] <==
	[INFO] 10.244.0.20:49855 - 30624 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000122379s
	[INFO] 10.244.0.20:49855 - 17560 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002190116s
	[INFO] 10.244.0.20:36142 - 44543 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0037319s
	[INFO] 10.244.0.20:49855 - 48967 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002925554s
	[INFO] 10.244.0.20:36142 - 4472 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001704564s
	[INFO] 10.244.0.20:36142 - 29888 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000153279s
	[INFO] 10.244.0.20:49855 - 57504 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000086268s
	[INFO] 10.244.0.20:51114 - 24472 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000131962s
	[INFO] 10.244.0.20:51114 - 6887 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011995s
	[INFO] 10.244.0.20:55458 - 63111 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00006144s
	[INFO] 10.244.0.20:51114 - 30263 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006157s
	[INFO] 10.244.0.20:55458 - 23472 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040657s
	[INFO] 10.244.0.20:51114 - 13753 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055721s
	[INFO] 10.244.0.20:55458 - 23994 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040213s
	[INFO] 10.244.0.20:51114 - 60868 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083437s
	[INFO] 10.244.0.20:55458 - 62680 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039746s
	[INFO] 10.244.0.20:51114 - 16264 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070572s
	[INFO] 10.244.0.20:55458 - 60727 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082051s
	[INFO] 10.244.0.20:55458 - 39978 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000162888s
	[INFO] 10.244.0.20:55458 - 58883 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00130062s
	[INFO] 10.244.0.20:51114 - 15750 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001895636s
	[INFO] 10.244.0.20:51114 - 5277 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001871686s
	[INFO] 10.244.0.20:55458 - 49994 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001045819s
	[INFO] 10.244.0.20:55458 - 31133 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074929s
	[INFO] 10.244.0.20:51114 - 924 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000055006s
	
	
	==> describe nodes <==
	Name:               addons-511560
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-511560
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=addons-511560
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_33_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-511560
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:33:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-511560
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:36:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:36:27 +0000   Thu, 14 Mar 2024 18:33:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:36:27 +0000   Thu, 14 Mar 2024 18:33:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:36:27 +0000   Thu, 14 Mar 2024 18:33:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:36:27 +0000   Thu, 14 Mar 2024 18:34:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-511560
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 5160eaef79db48c8b6bbc850c4adcea8
	  System UUID:                48ca3126-81e4-4e7b-ac8d-2add4550eb90
	  Boot ID:                    82438414-92b7-424c-b6a1-17a6c30d7d8a
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-8pjls    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  default                     hello-world-app-5d77478584-pqwnc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-7d69788767-nthb5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 coredns-5dd5756b68-5t9fq                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m36s
	  kube-system                 etcd-addons-511560                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m50s
	  kube-system                 kube-apiserver-addons-511560               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-addons-511560      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 kube-proxy-f8mpx                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-scheduler-addons-511560               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          local-path-provisioner-78b46b4d5c-99tzv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-sv57l             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m34s                  kube-proxy       
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m57s (x8 over 2m57s)  kubelet          Node addons-511560 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m57s (x8 over 2m57s)  kubelet          Node addons-511560 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s (x7 over 2m57s)  kubelet          Node addons-511560 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m49s                  kubelet          Node addons-511560 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s                  kubelet          Node addons-511560 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m49s                  kubelet          Node addons-511560 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m49s                  kubelet          Node addons-511560 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m39s                  kubelet          Node addons-511560 status is now: NodeReady
	  Normal  RegisteredNode           2m38s                  node-controller  Node addons-511560 event: Registered Node addons-511560 in Controller
	
	
	==> dmesg <==
	[  +0.000771] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.001093] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=0000000065cf8b29
	[  +0.001132] FS-Cache: N-key=[8] 'e23a5c0100000000'
	[  +0.002718] FS-Cache: Duplicate cookie detected
	[  +0.000789] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001152] FS-Cache: O-cookie d=0000000055f1b311{9p.inode} n=0000000012debf34
	[  +0.001113] FS-Cache: O-key=[8] 'e23a5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000961] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=0000000016cbe2d0
	[  +0.001220] FS-Cache: N-key=[8] 'e23a5c0100000000'
	[  +2.315763] FS-Cache: Duplicate cookie detected
	[  +0.000794] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001017] FS-Cache: O-cookie d=0000000055f1b311{9p.inode} n=00000000d0a045e2
	[  +0.001069] FS-Cache: O-key=[8] 'e13a5c0100000000'
	[  +0.000783] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=0000000065cf8b29
	[  +0.001059] FS-Cache: N-key=[8] 'e13a5c0100000000'
	[  +0.286664] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000964] FS-Cache: O-cookie d=0000000055f1b311{9p.inode} n=00000000e73ac8e1
	[  +0.001060] FS-Cache: O-key=[8] 'e73a5c0100000000'
	[  +0.000834] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000925] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=00000000d36177ac
	[  +0.001222] FS-Cache: N-key=[8] 'e73a5c0100000000'
	[Mar14 18:07] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [9db6e59b9fca] <==
	{"level":"info","ts":"2024-03-14T18:33:46.477749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-03-14T18:33:46.477868Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-03-14T18:33:46.477981Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T18:33:46.478394Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-14T18:33:46.47841Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-03-14T18:33:46.478708Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T18:33:46.478737Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T18:33:47.065451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T18:33:47.065498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T18:33:47.065527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-03-14T18:33:47.065544Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:33:47.065589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-14T18:33:47.065622Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-03-14T18:33:47.065662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-03-14T18:33:47.073598Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-511560 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:33:47.073783Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:33:47.07487Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T18:33:47.075057Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:33:47.076679Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-03-14T18:33:47.081867Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:33:47.082177Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:33:47.082275Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T18:33:47.093491Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:33:47.093766Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:33:47.093886Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [ab2c52db8a33] <==
	2024/03/14 18:35:29 GCP Auth Webhook started!
	2024/03/14 18:35:33 Ready to marshal response ...
	2024/03/14 18:35:33 Ready to write response ...
	2024/03/14 18:35:42 Ready to marshal response ...
	2024/03/14 18:35:42 Ready to write response ...
	2024/03/14 18:35:52 Ready to marshal response ...
	2024/03/14 18:35:52 Ready to write response ...
	2024/03/14 18:36:06 Ready to marshal response ...
	2024/03/14 18:36:06 Ready to write response ...
	2024/03/14 18:36:17 Ready to marshal response ...
	2024/03/14 18:36:17 Ready to write response ...
	2024/03/14 18:36:21 Ready to marshal response ...
	2024/03/14 18:36:21 Ready to write response ...
	2024/03/14 18:36:21 Ready to marshal response ...
	2024/03/14 18:36:21 Ready to write response ...
	2024/03/14 18:36:30 Ready to marshal response ...
	2024/03/14 18:36:30 Ready to write response ...
	
	
	==> kernel <==
	 18:36:42 up  3:19,  0 users,  load average: 2.21, 2.83, 2.79
	Linux addons-511560 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [f59dc13a212f] <==
	W0314 18:36:01.964461       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0314 18:36:06.514824       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0314 18:36:06.824494       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.239.92"}
	I0314 18:36:09.338535       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.338581       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.346730       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.346783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.357049       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.357095       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.379312       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.379362       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.400803       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.401407       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.426251       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.428670       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.471537       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.471587       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:36:09.489843       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:36:09.490065       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0314 18:36:10.357910       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0314 18:36:10.477256       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0314 18:36:10.512162       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0314 18:36:17.598383       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.248.117"}
	E0314 18:36:34.545596       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0314 18:36:36.363961       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [e2a896ca85af] <==
	E0314 18:36:20.236205       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:36:20.370459       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:36:20.370536       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 18:36:21.089774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.248µs"
	I0314 18:36:21.509742       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0314 18:36:21.705002       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0314 18:36:22.164312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="72.205µs"
	I0314 18:36:23.184129       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.741µs"
	W0314 18:36:28.501719       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:36:28.501768       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:36:30.044587       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:36:30.044719       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 18:36:31.112301       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="4.734µs"
	W0314 18:36:31.137559       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:36:31.137599       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 18:36:34.437751       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0314 18:36:34.453370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="4.874µs"
	I0314 18:36:34.459914       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0314 18:36:35.279270       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 18:36:35.279318       1 shared_informer.go:318] Caches are synced for resource quota
	I0314 18:36:35.667324       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="75.167µs"
	I0314 18:36:35.753811       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0314 18:36:35.753860       1 shared_informer.go:318] Caches are synced for garbage collector
	W0314 18:36:38.687071       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:36:38.687106       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [2dfb2ed00122] <==
	I0314 18:34:07.741399       1 server_others.go:69] "Using iptables proxy"
	I0314 18:34:07.765731       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0314 18:34:07.808062       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0314 18:34:07.810123       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:34:07.810154       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0314 18:34:07.810163       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0314 18:34:07.810194       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:34:07.810405       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:34:07.810416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:34:07.811620       1 config.go:188] "Starting service config controller"
	I0314 18:34:07.811631       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:34:07.811648       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:34:07.811652       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:34:07.811978       1 config.go:315] "Starting node config controller"
	I0314 18:34:07.811984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:34:07.911880       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:34:07.911934       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:34:07.912172       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [62629eb8f02d] <==
	W0314 18:33:50.335474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:33:50.335588       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:33:50.336924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:33:50.337483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:33:50.337749       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:33:50.337850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:33:50.338056       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:33:50.338357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:33:50.338581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 18:33:50.338677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 18:33:50.339358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 18:33:50.339460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 18:33:50.339644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 18:33:50.339737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 18:33:50.339961       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:33:50.340319       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:33:50.340264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:33:50.340777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:33:51.187822       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:33:51.188039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:33:51.292957       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:33:51.293006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:33:51.575017       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:33:51.575227       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 18:33:53.577587       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 18:36:32 addons-511560 kubelet[2471]: I0314 18:36:32.781983    2471 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3ef3332b-18c2-4c94-b305-89f5bc199bcb-data\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:32 addons-511560 kubelet[2471]: I0314 18:36:32.782029    2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-plcdl\" (UniqueName: \"kubernetes.io/projected/3ef3332b-18c2-4c94-b305-89f5bc199bcb-kube-api-access-plcdl\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:32 addons-511560 kubelet[2471]: I0314 18:36:32.782045    2471 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3ef3332b-18c2-4c94-b305-89f5bc199bcb-script\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:32 addons-511560 kubelet[2471]: I0314 18:36:32.782057    2471 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3ef3332b-18c2-4c94-b305-89f5bc199bcb-gcp-creds\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:33 addons-511560 kubelet[2471]: I0314 18:36:33.457333    2471 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4514c4cf93c9732d6c27770d7c3d9be98c3564af6988ee23c8ca4af89dd71a19"
	Mar 14 18:36:33 addons-511560 kubelet[2471]: I0314 18:36:33.475876    2471 scope.go:117] "RemoveContainer" containerID="b1c4d9a393044f1d68da82518e2df8df8028295ac8cd1e6442e10e1435b58a4b"
	Mar 14 18:36:33 addons-511560 kubelet[2471]: I0314 18:36:33.590840    2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898fv\" (UniqueName: \"kubernetes.io/projected/f607b390-31d1-4707-91b2-335d6e7c3019-kube-api-access-898fv\") pod \"f607b390-31d1-4707-91b2-335d6e7c3019\" (UID: \"f607b390-31d1-4707-91b2-335d6e7c3019\") "
	Mar 14 18:36:33 addons-511560 kubelet[2471]: I0314 18:36:33.595102    2471 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f607b390-31d1-4707-91b2-335d6e7c3019-kube-api-access-898fv" (OuterVolumeSpecName: "kube-api-access-898fv") pod "f607b390-31d1-4707-91b2-335d6e7c3019" (UID: "f607b390-31d1-4707-91b2-335d6e7c3019"). InnerVolumeSpecName "kube-api-access-898fv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 14 18:36:33 addons-511560 kubelet[2471]: I0314 18:36:33.691619    2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-898fv\" (UniqueName: \"kubernetes.io/projected/f607b390-31d1-4707-91b2-335d6e7c3019-kube-api-access-898fv\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:34 addons-511560 kubelet[2471]: I0314 18:36:34.430400    2471 scope.go:117] "RemoveContainer" containerID="c1f53da80ab5c231061a4821419935283c562ba370d9b98038f1ffaaba44c729"
	Mar 14 18:36:35 addons-511560 kubelet[2471]: I0314 18:36:35.439480    2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1d4110ec-713d-446f-a840-874caa6b68e8" path="/var/lib/kubelet/pods/1d4110ec-713d-446f-a840-874caa6b68e8/volumes"
	Mar 14 18:36:35 addons-511560 kubelet[2471]: I0314 18:36:35.439985    2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="adc153bb-c2b6-4d3b-afaa-1312d127c02d" path="/var/lib/kubelet/pods/adc153bb-c2b6-4d3b-afaa-1312d127c02d/volumes"
	Mar 14 18:36:35 addons-511560 kubelet[2471]: I0314 18:36:35.440406    2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f607b390-31d1-4707-91b2-335d6e7c3019" path="/var/lib/kubelet/pods/f607b390-31d1-4707-91b2-335d6e7c3019/volumes"
	Mar 14 18:36:35 addons-511560 kubelet[2471]: I0314 18:36:35.649804    2471 scope.go:117] "RemoveContainer" containerID="c1f53da80ab5c231061a4821419935283c562ba370d9b98038f1ffaaba44c729"
	Mar 14 18:36:35 addons-511560 kubelet[2471]: I0314 18:36:35.650293    2471 scope.go:117] "RemoveContainer" containerID="35a295142b4be2d83a4104f9eaff1783d2a162567f9b2a9c050b2179016b83e8"
	Mar 14 18:36:35 addons-511560 kubelet[2471]: E0314 18:36:35.650767    2471 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-pqwnc_default(d281523f-8195-4d54-9d60-61f6234cae04)\"" pod="default/hello-world-app-5d77478584-pqwnc" podUID="d281523f-8195-4d54-9d60-61f6234cae04"
	Mar 14 18:36:37 addons-511560 kubelet[2471]: I0314 18:36:37.443309    2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ef3332b-18c2-4c94-b305-89f5bc199bcb" path="/var/lib/kubelet/pods/3ef3332b-18c2-4c94-b305-89f5bc199bcb/volumes"
	Mar 14 18:36:37 addons-511560 kubelet[2471]: I0314 18:36:37.925396    2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8v8b\" (UniqueName: \"kubernetes.io/projected/83e2d6a8-ddf1-4067-8aba-20fa5c51c236-kube-api-access-g8v8b\") pod \"83e2d6a8-ddf1-4067-8aba-20fa5c51c236\" (UID: \"83e2d6a8-ddf1-4067-8aba-20fa5c51c236\") "
	Mar 14 18:36:37 addons-511560 kubelet[2471]: I0314 18:36:37.925483    2471 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83e2d6a8-ddf1-4067-8aba-20fa5c51c236-webhook-cert\") pod \"83e2d6a8-ddf1-4067-8aba-20fa5c51c236\" (UID: \"83e2d6a8-ddf1-4067-8aba-20fa5c51c236\") "
	Mar 14 18:36:37 addons-511560 kubelet[2471]: I0314 18:36:37.930115    2471 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83e2d6a8-ddf1-4067-8aba-20fa5c51c236-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "83e2d6a8-ddf1-4067-8aba-20fa5c51c236" (UID: "83e2d6a8-ddf1-4067-8aba-20fa5c51c236"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 14 18:36:37 addons-511560 kubelet[2471]: I0314 18:36:37.931674    2471 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83e2d6a8-ddf1-4067-8aba-20fa5c51c236-kube-api-access-g8v8b" (OuterVolumeSpecName: "kube-api-access-g8v8b") pod "83e2d6a8-ddf1-4067-8aba-20fa5c51c236" (UID: "83e2d6a8-ddf1-4067-8aba-20fa5c51c236"). InnerVolumeSpecName "kube-api-access-g8v8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 14 18:36:38 addons-511560 kubelet[2471]: I0314 18:36:38.026751    2471 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g8v8b\" (UniqueName: \"kubernetes.io/projected/83e2d6a8-ddf1-4067-8aba-20fa5c51c236-kube-api-access-g8v8b\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:38 addons-511560 kubelet[2471]: I0314 18:36:38.026803    2471 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/83e2d6a8-ddf1-4067-8aba-20fa5c51c236-webhook-cert\") on node \"addons-511560\" DevicePath \"\""
	Mar 14 18:36:38 addons-511560 kubelet[2471]: I0314 18:36:38.745981    2471 scope.go:117] "RemoveContainer" containerID="62cc25f85af16fc40d9a5137080540181d298e54961d3acccf42185c5b95ea94"
	Mar 14 18:36:39 addons-511560 kubelet[2471]: I0314 18:36:39.443071    2471 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="83e2d6a8-ddf1-4067-8aba-20fa5c51c236" path="/var/lib/kubelet/pods/83e2d6a8-ddf1-4067-8aba-20fa5c51c236/volumes"
	
	
	==> storage-provisioner [06ddf561c9ba] <==
	I0314 18:34:14.935250       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 18:34:14.960184       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 18:34:14.960236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 18:34:14.979752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 18:34:14.979926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-511560_188a4227-12a3-42ab-85de-48f415b51948!
	I0314 18:34:14.980771       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fb423941-5cdb-4a4b-bd5a-b759629e2fac", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-511560_188a4227-12a3-42ab-85de-48f415b51948 became leader
	I0314 18:34:15.080847       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-511560_188a4227-12a3-42ab-85de-48f415b51948!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-511560 -n addons-511560
helpers_test.go:261: (dbg) Run:  kubectl --context addons-511560 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.49s)

TestScheduledStopUnix (35.3s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-364114 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-364114 --memory=2048 --driver=docker  --container-runtime=docker: (30.634968047s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-364114 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-364114 -n scheduled-stop-364114
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-364114 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 749324 running but should have been killed on reschedule of stop
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-14 19:09:09.644582722 +0000 UTC m=+2208.537364193
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-364114
helpers_test.go:235: (dbg) docker inspect scheduled-stop-364114:

-- stdout --
	[
	    {
	        "Id": "af811371dc249eacaed2ce7b8fdf1fca1ae47632da2d189aa928b4de04fd1b7c",
	        "Created": "2024-03-14T19:08:43.57099707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 746425,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-03-14T19:08:43.86510565Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:db62270b4bb0cfcde696782f7a6322baca275275e31814ce9fd8998407bf461e",
	        "ResolvConfPath": "/var/lib/docker/containers/af811371dc249eacaed2ce7b8fdf1fca1ae47632da2d189aa928b4de04fd1b7c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/af811371dc249eacaed2ce7b8fdf1fca1ae47632da2d189aa928b4de04fd1b7c/hostname",
	        "HostsPath": "/var/lib/docker/containers/af811371dc249eacaed2ce7b8fdf1fca1ae47632da2d189aa928b4de04fd1b7c/hosts",
	        "LogPath": "/var/lib/docker/containers/af811371dc249eacaed2ce7b8fdf1fca1ae47632da2d189aa928b4de04fd1b7c/af811371dc249eacaed2ce7b8fdf1fca1ae47632da2d189aa928b4de04fd1b7c-json.log",
	        "Name": "/scheduled-stop-364114",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-364114:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-364114",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e40e31af0d4e0861f238f3470347a9c699892f99639065a1f091693abd9cf0d-init/diff:/var/lib/docker/overlay2/5d0772f9548c62b17706c652675b28e51ca47810b015447035374bcde04cf033/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e40e31af0d4e0861f238f3470347a9c699892f99639065a1f091693abd9cf0d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e40e31af0d4e0861f238f3470347a9c699892f99639065a1f091693abd9cf0d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e40e31af0d4e0861f238f3470347a9c699892f99639065a1f091693abd9cf0d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-364114",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-364114/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-364114",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-364114",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-364114",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bc21492f341fb02e6e97e9a431ad321be69d72559461094efcd15ba7de7ca534",
	            "SandboxKey": "/var/run/docker/netns/bc21492f341f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33709"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33708"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33705"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33707"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33706"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-364114": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "af811371dc24",
	                        "scheduled-stop-364114"
	                    ],
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "NetworkID": "b61a69f34d9a6bf0cbb70533a85cbbfa4f2450e5a2827c35a66a9a969573104c",
	                    "EndpointID": "9414cc61dec312e9959c16726b646f34bf87e655a825b9b4d12bac5978d9fe93",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "scheduled-stop-364114",
	                        "af811371dc24"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-364114 -n scheduled-stop-364114
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-364114 logs -n 25
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-199663            | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:03 UTC | 14 Mar 24 19:03 UTC |
	| start   | -p multinode-199663            | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:03 UTC | 14 Mar 24 19:04 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-199663       | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:04 UTC |                     |
	| node    | multinode-199663 node delete   | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:04 UTC | 14 Mar 24 19:04 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-199663 stop          | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:04 UTC | 14 Mar 24 19:04 UTC |
	| start   | -p multinode-199663            | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:04 UTC | 14 Mar 24 19:05 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| node    | list -p multinode-199663       | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:05 UTC |                     |
	| start   | -p multinode-199663-m02        | multinode-199663-m02  | jenkins | v1.32.0 | 14 Mar 24 19:05 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| start   | -p multinode-199663-m03        | multinode-199663-m03  | jenkins | v1.32.0 | 14 Mar 24 19:05 UTC | 14 Mar 24 19:05 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| node    | add -p multinode-199663        | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:05 UTC |                     |
	| delete  | -p multinode-199663-m03        | multinode-199663-m03  | jenkins | v1.32.0 | 14 Mar 24 19:05 UTC | 14 Mar 24 19:05 UTC |
	| delete  | -p multinode-199663            | multinode-199663      | jenkins | v1.32.0 | 14 Mar 24 19:05 UTC | 14 Mar 24 19:06 UTC |
	| start   | -p test-preload-835643         | test-preload-835643   | jenkins | v1.32.0 | 14 Mar 24 19:06 UTC | 14 Mar 24 19:07 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-835643 image pull | test-preload-835643   | jenkins | v1.32.0 | 14 Mar 24 19:07 UTC | 14 Mar 24 19:07 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-835643         | test-preload-835643   | jenkins | v1.32.0 | 14 Mar 24 19:07 UTC | 14 Mar 24 19:08 UTC |
	| start   | -p test-preload-835643         | test-preload-835643   | jenkins | v1.32.0 | 14 Mar 24 19:08 UTC | 14 Mar 24 19:08 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| image   | test-preload-835643 image list | test-preload-835643   | jenkins | v1.32.0 | 14 Mar 24 19:08 UTC | 14 Mar 24 19:08 UTC |
	| delete  | -p test-preload-835643         | test-preload-835643   | jenkins | v1.32.0 | 14 Mar 24 19:08 UTC | 14 Mar 24 19:08 UTC |
	| start   | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:08 UTC | 14 Mar 24 19:09 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:09 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:09 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:09 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:09 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:09 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-364114       | scheduled-stop-364114 | jenkins | v1.32.0 | 14 Mar 24 19:09 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:08:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:08:38.486459  745981 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:08:38.486594  745981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:08:38.486598  745981 out.go:304] Setting ErrFile to fd 2...
	I0314 19:08:38.486601  745981 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:08:38.486854  745981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 19:08:38.487276  745981 out.go:298] Setting JSON to false
	I0314 19:08:38.488144  745981 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13862,"bootTime":1710429457,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 19:08:38.488205  745981 start.go:139] virtualization:  
	I0314 19:08:38.491086  745981 out.go:177] * [scheduled-stop-364114] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 19:08:38.493869  745981 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:08:38.495924  745981 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:08:38.493954  745981 notify.go:220] Checking for updates...
	I0314 19:08:38.499522  745981 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 19:08:38.501567  745981 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 19:08:38.504485  745981 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 19:08:38.506888  745981 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:08:38.509112  745981 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:08:38.543383  745981 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 19:08:38.543481  745981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 19:08:38.598655  745981 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-14 19:08:38.58967547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 19:08:38.598751  745981 docker.go:295] overlay module found
	I0314 19:08:38.601216  745981 out.go:177] * Using the docker driver based on user configuration
	I0314 19:08:38.602945  745981 start.go:297] selected driver: docker
	I0314 19:08:38.602953  745981 start.go:901] validating driver "docker" against <nil>
	I0314 19:08:38.602965  745981 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:08:38.603613  745981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 19:08:38.660539  745981 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:46 SystemTime:2024-03-14 19:08:38.65095906 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 19:08:38.660716  745981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:08:38.660943  745981 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 19:08:38.662923  745981 out.go:177] * Using Docker driver with root privileges
	I0314 19:08:38.664905  745981 cni.go:84] Creating CNI manager for ""
	I0314 19:08:38.664933  745981 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 19:08:38.664953  745981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 19:08:38.665029  745981 start.go:340] cluster config:
	{Name:scheduled-stop-364114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-364114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:08:38.667705  745981 out.go:177] * Starting "scheduled-stop-364114" primary control-plane node in "scheduled-stop-364114" cluster
	I0314 19:08:38.669636  745981 cache.go:121] Beginning downloading kic base image for docker with docker
	I0314 19:08:38.672073  745981 out.go:177] * Pulling base image v0.0.42-1710284843-18375 ...
	I0314 19:08:38.674113  745981 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:08:38.674169  745981 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 19:08:38.674218  745981 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 19:08:38.674231  745981 cache.go:56] Caching tarball of preloaded images
	I0314 19:08:38.674327  745981 preload.go:173] Found /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 19:08:38.674333  745981 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 19:08:38.674671  745981 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/config.json ...
	I0314 19:08:38.674690  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/config.json: {Name:mk6ad7a644aa28074f535330d631c87cbfdcf780 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:38.689709  745981 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon, skipping pull
	I0314 19:08:38.689725  745981 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in daemon, skipping load
	I0314 19:08:38.689749  745981 cache.go:194] Successfully downloaded all kic artifacts
	I0314 19:08:38.689776  745981 start.go:360] acquireMachinesLock for scheduled-stop-364114: {Name:mk215caa987d8293f88a4fa5e1f93d57d8624816 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:08:38.689928  745981 start.go:364] duration metric: took 134.941µs to acquireMachinesLock for "scheduled-stop-364114"
	I0314 19:08:38.689956  745981 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-364114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-364114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:08:38.690044  745981 start.go:125] createHost starting for "" (driver="docker")
	I0314 19:08:38.694179  745981 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0314 19:08:38.694439  745981 start.go:159] libmachine.API.Create for "scheduled-stop-364114" (driver="docker")
	I0314 19:08:38.694471  745981 client.go:168] LocalClient.Create starting
	I0314 19:08:38.694547  745981 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem
	I0314 19:08:38.694585  745981 main.go:141] libmachine: Decoding PEM data...
	I0314 19:08:38.694602  745981 main.go:141] libmachine: Parsing certificate...
	I0314 19:08:38.694654  745981 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem
	I0314 19:08:38.694670  745981 main.go:141] libmachine: Decoding PEM data...
	I0314 19:08:38.694678  745981 main.go:141] libmachine: Parsing certificate...
	I0314 19:08:38.695071  745981 cli_runner.go:164] Run: docker network inspect scheduled-stop-364114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0314 19:08:38.710275  745981 cli_runner.go:211] docker network inspect scheduled-stop-364114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0314 19:08:38.710352  745981 network_create.go:281] running [docker network inspect scheduled-stop-364114] to gather additional debugging logs...
	I0314 19:08:38.710367  745981 cli_runner.go:164] Run: docker network inspect scheduled-stop-364114
	W0314 19:08:38.724958  745981 cli_runner.go:211] docker network inspect scheduled-stop-364114 returned with exit code 1
	I0314 19:08:38.724978  745981 network_create.go:284] error running [docker network inspect scheduled-stop-364114]: docker network inspect scheduled-stop-364114: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-364114 not found
	I0314 19:08:38.725000  745981 network_create.go:286] output of [docker network inspect scheduled-stop-364114]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-364114 not found
	
	** /stderr **
	I0314 19:08:38.725096  745981 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 19:08:38.740936  745981 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ba5bd3ddb140 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9a:a3:d0:f1} reservation:<nil>}
	I0314 19:08:38.741307  745981 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-1c20f874d38c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:91:0e:f6:72} reservation:<nil>}
	I0314 19:08:38.741705  745981 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d2ca26b11847 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:39:e8:5c:36} reservation:<nil>}
	I0314 19:08:38.742141  745981 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025c2330}
	I0314 19:08:38.742158  745981 network_create.go:124] attempt to create docker network scheduled-stop-364114 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0314 19:08:38.742223  745981 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-364114 scheduled-stop-364114
	I0314 19:08:38.805546  745981 network_create.go:108] docker network scheduled-stop-364114 192.168.76.0/24 created
	I0314 19:08:38.805570  745981 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-364114" container
	I0314 19:08:38.805647  745981 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0314 19:08:38.820297  745981 cli_runner.go:164] Run: docker volume create scheduled-stop-364114 --label name.minikube.sigs.k8s.io=scheduled-stop-364114 --label created_by.minikube.sigs.k8s.io=true
	I0314 19:08:38.837103  745981 oci.go:103] Successfully created a docker volume scheduled-stop-364114
	I0314 19:08:38.837194  745981 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-364114-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-364114 --entrypoint /usr/bin/test -v scheduled-stop-364114:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -d /var/lib
	I0314 19:08:39.408695  745981 oci.go:107] Successfully prepared a docker volume scheduled-stop-364114
	I0314 19:08:39.408724  745981 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:08:39.408754  745981 kic.go:194] Starting extracting preloaded images to volume ...
	I0314 19:08:39.408853  745981 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-364114:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir
	I0314 19:08:43.489711  745981 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-364114:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f -I lz4 -xf /preloaded.tar -C /extractDir: (4.080805268s)
	I0314 19:08:43.492965  745981 kic.go:203] duration metric: took 4.08418893s to extract preloaded images to volume ...
	W0314 19:08:43.493149  745981 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0314 19:08:43.493270  745981 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0314 19:08:43.556726  745981 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-364114 --name scheduled-stop-364114 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-364114 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-364114 --network scheduled-stop-364114 --ip 192.168.76.2 --volume scheduled-stop-364114:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f
	I0314 19:08:43.875944  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Running}}
	I0314 19:08:43.898161  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Status}}
	I0314 19:08:43.919179  745981 cli_runner.go:164] Run: docker exec scheduled-stop-364114 stat /var/lib/dpkg/alternatives/iptables
	I0314 19:08:43.993673  745981 oci.go:144] the created container "scheduled-stop-364114" has a running status.
	I0314 19:08:43.993702  745981 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa...
	I0314 19:08:44.298670  745981 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0314 19:08:44.332040  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Status}}
	I0314 19:08:44.359637  745981 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0314 19:08:44.359649  745981 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-364114 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0314 19:08:44.456542  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Status}}
	I0314 19:08:44.482643  745981 machine.go:94] provisionDockerMachine start ...
	I0314 19:08:44.482733  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:44.505683  745981 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:44.505955  745981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33709 <nil> <nil>}
	I0314 19:08:44.505962  745981 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:08:44.696940  745981 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-364114
	
	I0314 19:08:44.696954  745981 ubuntu.go:169] provisioning hostname "scheduled-stop-364114"
	I0314 19:08:44.697030  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:44.715210  745981 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:44.715439  745981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33709 <nil> <nil>}
	I0314 19:08:44.715448  745981 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-364114 && echo "scheduled-stop-364114" | sudo tee /etc/hostname
	I0314 19:08:44.880726  745981 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-364114
	
	I0314 19:08:44.880795  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:44.900144  745981 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:44.900387  745981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33709 <nil> <nil>}
	I0314 19:08:44.900403  745981 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-364114' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-364114/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-364114' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:08:45.068658  745981 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:08:45.068680  745981 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18384-542901/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-542901/.minikube}
	I0314 19:08:45.068706  745981 ubuntu.go:177] setting up certificates
	I0314 19:08:45.068716  745981 provision.go:84] configureAuth start
	I0314 19:08:45.068792  745981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-364114
	I0314 19:08:45.091719  745981 provision.go:143] copyHostCerts
	I0314 19:08:45.091793  745981 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-542901/.minikube/ca.pem, removing ...
	I0314 19:08:45.091801  745981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-542901/.minikube/ca.pem
	I0314 19:08:45.091902  745981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-542901/.minikube/ca.pem (1078 bytes)
	I0314 19:08:45.092007  745981 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-542901/.minikube/cert.pem, removing ...
	I0314 19:08:45.092011  745981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-542901/.minikube/cert.pem
	I0314 19:08:45.092039  745981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-542901/.minikube/cert.pem (1123 bytes)
	I0314 19:08:45.092091  745981 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-542901/.minikube/key.pem, removing ...
	I0314 19:08:45.092094  745981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-542901/.minikube/key.pem
	I0314 19:08:45.092119  745981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-542901/.minikube/key.pem (1679 bytes)
	I0314 19:08:45.092165  745981 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-542901/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-364114 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-364114]
	I0314 19:08:46.747515  745981 provision.go:177] copyRemoteCerts
	I0314 19:08:46.747585  745981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:08:46.747627  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:46.763124  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:08:46.862050  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0314 19:08:46.886230  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:08:46.910335  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:08:46.933905  745981 provision.go:87] duration metric: took 1.865176869s to configureAuth
	I0314 19:08:46.933922  745981 ubuntu.go:193] setting minikube options for container-runtime
	I0314 19:08:46.934107  745981 config.go:182] Loaded profile config "scheduled-stop-364114": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:08:46.934164  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:46.950077  745981 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:46.950314  745981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33709 <nil> <nil>}
	I0314 19:08:46.950321  745981 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 19:08:47.093833  745981 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0314 19:08:47.093846  745981 ubuntu.go:71] root file system type: overlay
	I0314 19:08:47.093963  745981 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 19:08:47.094028  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:47.110572  745981 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:47.110817  745981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33709 <nil> <nil>}
	I0314 19:08:47.110891  745981 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 19:08:47.261511  745981 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 19:08:47.261590  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:47.278715  745981 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:47.278948  745981 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e1e00] 0x3e4660 <nil>  [] 0s} 127.0.0.1 33709 <nil> <nil>}
	I0314 19:08:47.278963  745981 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 19:08:48.041967  745981 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-03-06 16:31:58.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-03-14 19:08:47.257075249 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0314 19:08:48.041990  745981 machine.go:97] duration metric: took 3.559335195s to provisionDockerMachine
	I0314 19:08:48.042001  745981 client.go:171] duration metric: took 9.347526151s to LocalClient.Create
	I0314 19:08:48.042021  745981 start.go:167] duration metric: took 9.347582142s to libmachine.API.Create "scheduled-stop-364114"
	I0314 19:08:48.042029  745981 start.go:293] postStartSetup for "scheduled-stop-364114" (driver="docker")
	I0314 19:08:48.042039  745981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:08:48.042140  745981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:08:48.042187  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:48.059740  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:08:48.162932  745981 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:08:48.166104  745981 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0314 19:08:48.166129  745981 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0314 19:08:48.166139  745981 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0314 19:08:48.166145  745981 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0314 19:08:48.166155  745981 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-542901/.minikube/addons for local assets ...
	I0314 19:08:48.166211  745981 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-542901/.minikube/files for local assets ...
	I0314 19:08:48.166303  745981 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-542901/.minikube/files/etc/ssl/certs/5483092.pem -> 5483092.pem in /etc/ssl/certs
	I0314 19:08:48.166405  745981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:08:48.175070  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/files/etc/ssl/certs/5483092.pem --> /etc/ssl/certs/5483092.pem (1708 bytes)
	I0314 19:08:48.199226  745981 start.go:296] duration metric: took 157.18386ms for postStartSetup
	I0314 19:08:48.199634  745981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-364114
	I0314 19:08:48.215328  745981 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/config.json ...
	I0314 19:08:48.215626  745981 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 19:08:48.215665  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:48.232105  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:08:48.326315  745981 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0314 19:08:48.330582  745981 start.go:128] duration metric: took 9.64052368s to createHost
	I0314 19:08:48.330597  745981 start.go:83] releasing machines lock for "scheduled-stop-364114", held for 9.640660549s
	I0314 19:08:48.330665  745981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-364114
	I0314 19:08:48.345764  745981 ssh_runner.go:195] Run: cat /version.json
	I0314 19:08:48.345807  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:48.346048  745981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:08:48.346082  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:08:48.369126  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:08:48.377552  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:08:48.464995  745981 ssh_runner.go:195] Run: systemctl --version
	I0314 19:08:48.594462  745981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 19:08:48.598650  745981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0314 19:08:48.624335  745981 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0314 19:08:48.624405  745981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:08:48.653730  745981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0314 19:08:48.653746  745981 start.go:494] detecting cgroup driver to use...
	I0314 19:08:48.653788  745981 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 19:08:48.653899  745981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:08:48.670732  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0314 19:08:48.680392  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 19:08:48.689900  745981 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 19:08:48.689989  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 19:08:48.699856  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:08:48.709192  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 19:08:48.718714  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 19:08:48.728114  745981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:08:48.736831  745981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 19:08:48.746722  745981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:08:48.755282  745981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:08:48.763381  745981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:48.841252  745981 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 19:08:48.945390  745981 start.go:494] detecting cgroup driver to use...
	I0314 19:08:48.945489  745981 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0314 19:08:48.945539  745981 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 19:08:48.965248  745981 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0314 19:08:48.965317  745981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 19:08:48.977534  745981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:08:48.996359  745981 ssh_runner.go:195] Run: which cri-dockerd
	I0314 19:08:49.004287  745981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 19:08:49.013598  745981 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 19:08:49.032264  745981 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 19:08:49.139321  745981 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 19:08:49.234512  745981 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 19:08:49.234617  745981 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 19:08:49.255750  745981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:49.347189  745981 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 19:08:49.614723  745981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 19:08:49.627045  745981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:08:49.639594  745981 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 19:08:49.724282  745981 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 19:08:49.807777  745981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:49.897263  745981 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 19:08:49.911776  745981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 19:08:49.923501  745981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:50.017132  745981 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 19:08:50.097931  745981 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 19:08:50.098006  745981 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 19:08:50.103329  745981 start.go:562] Will wait 60s for crictl version
	I0314 19:08:50.103393  745981 ssh_runner.go:195] Run: which crictl
	I0314 19:08:50.108182  745981 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:08:50.160175  745981 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  25.0.4
	RuntimeApiVersion:  v1
	I0314 19:08:50.160262  745981 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:08:50.185136  745981 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 19:08:50.210435  745981 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 25.0.4 ...
	I0314 19:08:50.210553  745981 cli_runner.go:164] Run: docker network inspect scheduled-stop-364114 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0314 19:08:50.225838  745981 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0314 19:08:50.229569  745981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:08:50.240647  745981 kubeadm.go:877] updating cluster {Name:scheduled-stop-364114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-364114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:08:50.240759  745981 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 19:08:50.240818  745981 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:08:50.259025  745981 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 19:08:50.259038  745981 docker.go:615] Images already preloaded, skipping extraction
	I0314 19:08:50.259099  745981 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 19:08:50.277170  745981 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 19:08:50.277185  745981 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:08:50.277201  745981 kubeadm.go:928] updating node { 192.168.76.2 8443 v1.28.4 docker true true} ...
	I0314 19:08:50.277307  745981 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=scheduled-stop-364114 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-364114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:08:50.277373  745981 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 19:08:50.323470  745981 cni.go:84] Creating CNI manager for ""
	I0314 19:08:50.323489  745981 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 19:08:50.323498  745981 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:08:50.323516  745981 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-364114 NodeName:scheduled-stop-364114 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:08:50.323679  745981 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-364114"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:08:50.323745  745981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:08:50.332668  745981 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:08:50.332734  745981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:08:50.341630  745981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0314 19:08:50.359759  745981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:08:50.377563  745981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0314 19:08:50.395364  745981 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0314 19:08:50.398777  745981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:08:50.409735  745981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:50.499260  745981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:08:50.516013  745981 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114 for IP: 192.168.76.2
	I0314 19:08:50.516024  745981 certs.go:194] generating shared ca certs ...
	I0314 19:08:50.516039  745981 certs.go:226] acquiring lock for ca certs: {Name:mk75d138939e967a050dd4b5a1fc56eb3400f415 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:50.516169  745981 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-542901/.minikube/ca.key
	I0314 19:08:50.516211  745981 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.key
	I0314 19:08:50.516218  745981 certs.go:256] generating profile certs ...
	I0314 19:08:50.516273  745981 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/client.key
	I0314 19:08:50.516283  745981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/client.crt with IP's: []
	I0314 19:08:50.896414  745981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/client.crt ...
	I0314 19:08:50.896429  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/client.crt: {Name:mk96c870d006457e2306f3c674a101ca031ec053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:50.896623  745981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/client.key ...
	I0314 19:08:50.896631  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/client.key: {Name:mk09374f4cbc83a9fd081ddfd9e06823c18ed2ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:50.896724  745981 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.key.63a92084
	I0314 19:08:50.896737  745981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.crt.63a92084 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0314 19:08:51.158112  745981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.crt.63a92084 ...
	I0314 19:08:51.158127  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.crt.63a92084: {Name:mke0deb0a56ce692941665417510dfab5c7b6995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:51.158313  745981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.key.63a92084 ...
	I0314 19:08:51.158322  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.key.63a92084: {Name:mkfe5a789b11c06198365b62e2a0aec962b36269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:51.158406  745981 certs.go:381] copying /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.crt.63a92084 -> /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.crt
	I0314 19:08:51.158482  745981 certs.go:385] copying /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.key.63a92084 -> /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.key
	I0314 19:08:51.158553  745981 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.key
	I0314 19:08:51.158569  745981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.crt with IP's: []
	I0314 19:08:51.793964  745981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.crt ...
	I0314 19:08:51.793982  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.crt: {Name:mk55680b929811519c26fd1dc99f175063160816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:51.794191  745981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.key ...
	I0314 19:08:51.794201  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.key: {Name:mk1df48895d995e820ff73b62699cec51607a7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:51.794392  745981 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/548309.pem (1338 bytes)
	W0314 19:08:51.794429  745981 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-542901/.minikube/certs/548309_empty.pem, impossibly tiny 0 bytes
	I0314 19:08:51.794437  745981 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca-key.pem (1675 bytes)
	I0314 19:08:51.794459  745981 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/ca.pem (1078 bytes)
	I0314 19:08:51.794482  745981 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:08:51.794503  745981 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/certs/key.pem (1679 bytes)
	I0314 19:08:51.794542  745981 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-542901/.minikube/files/etc/ssl/certs/5483092.pem (1708 bytes)
	I0314 19:08:51.795166  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:08:51.820630  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0314 19:08:51.844667  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:08:51.868929  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 19:08:51.892421  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:08:51.916542  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 19:08:51.940794  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:08:51.965025  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/scheduled-stop-364114/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:08:51.989650  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/files/etc/ssl/certs/5483092.pem --> /usr/share/ca-certificates/5483092.pem (1708 bytes)
	I0314 19:08:52.023713  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:08:52.053404  745981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-542901/.minikube/certs/548309.pem --> /usr/share/ca-certificates/548309.pem (1338 bytes)
	I0314 19:08:52.081071  745981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:08:52.100970  745981 ssh_runner.go:195] Run: openssl version
	I0314 19:08:52.106634  745981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5483092.pem && ln -fs /usr/share/ca-certificates/5483092.pem /etc/ssl/certs/5483092.pem"
	I0314 19:08:52.116352  745981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5483092.pem
	I0314 19:08:52.119676  745981 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:38 /usr/share/ca-certificates/5483092.pem
	I0314 19:08:52.119733  745981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5483092.pem
	I0314 19:08:52.126423  745981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5483092.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:08:52.135602  745981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:08:52.144875  745981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:08:52.148343  745981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:33 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:08:52.148399  745981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:08:52.155125  745981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:08:52.164471  745981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/548309.pem && ln -fs /usr/share/ca-certificates/548309.pem /etc/ssl/certs/548309.pem"
	I0314 19:08:52.174049  745981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/548309.pem
	I0314 19:08:52.177505  745981 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:38 /usr/share/ca-certificates/548309.pem
	I0314 19:08:52.177580  745981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/548309.pem
	I0314 19:08:52.184588  745981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/548309.pem /etc/ssl/certs/51391683.0"
	I0314 19:08:52.194192  745981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:08:52.197317  745981 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:08:52.197359  745981 kubeadm.go:391] StartCluster: {Name:scheduled-stop-364114 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-364114 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:08:52.197516  745981 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 19:08:52.213751  745981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:08:52.222464  745981 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:08:52.231280  745981 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0314 19:08:52.231334  745981 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:08:52.239959  745981 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:08:52.239969  745981 kubeadm.go:156] found existing configuration files:
	
	I0314 19:08:52.240018  745981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:08:52.248533  745981 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:08:52.248586  745981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:08:52.256843  745981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:08:52.265293  745981 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:08:52.265345  745981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:08:52.273771  745981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:08:52.282553  745981 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:08:52.282618  745981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:08:52.291225  745981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:08:52.300060  745981 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:08:52.300122  745981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:08:52.308554  745981 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0314 19:08:52.355478  745981 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:08:52.355724  745981 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:08:52.405163  745981 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0314 19:08:52.405237  745981 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1055-aws
	I0314 19:08:52.405274  745981 kubeadm.go:309] OS: Linux
	I0314 19:08:52.405338  745981 kubeadm.go:309] CGROUPS_CPU: enabled
	I0314 19:08:52.405412  745981 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0314 19:08:52.405483  745981 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0314 19:08:52.405534  745981 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0314 19:08:52.405592  745981 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0314 19:08:52.405645  745981 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0314 19:08:52.405691  745981 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0314 19:08:52.405739  745981 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0314 19:08:52.405786  745981 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0314 19:08:52.477217  745981 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:08:52.477337  745981 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:08:52.477491  745981 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:08:52.793853  745981 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:08:52.797048  745981 out.go:204]   - Generating certificates and keys ...
	I0314 19:08:52.797217  745981 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:08:52.797293  745981 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:08:52.998360  745981 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:08:53.175679  745981 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:08:53.457328  745981 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 19:08:54.458336  745981 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 19:08:54.757782  745981 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 19:08:54.758056  745981 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-364114] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0314 19:08:55.157162  745981 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 19:08:55.157369  745981 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-364114] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0314 19:08:55.643362  745981 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:08:55.795011  745981 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:08:56.121970  745981 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 19:08:56.122096  745981 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:08:56.345155  745981 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:08:56.507695  745981 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:08:56.653377  745981 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:08:57.067130  745981 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:08:57.068006  745981 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:08:57.070958  745981 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:08:57.073411  745981 out.go:204]   - Booting up control plane ...
	I0314 19:08:57.073532  745981 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:08:57.073607  745981 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:08:57.086606  745981 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:08:57.098406  745981 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:08:57.099568  745981 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:08:57.099813  745981 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:08:57.207752  745981 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:09:05.212624  745981 kubeadm.go:309] [apiclient] All control plane components are healthy after 8.005535 seconds
	I0314 19:09:05.212759  745981 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:09:05.229480  745981 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:09:05.755622  745981 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:09:05.755813  745981 kubeadm.go:309] [mark-control-plane] Marking the node scheduled-stop-364114 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:09:06.268460  745981 kubeadm.go:309] [bootstrap-token] Using token: n72hs9.h183vnoqcpst9kfh
	I0314 19:09:06.270919  745981 out.go:204]   - Configuring RBAC rules ...
	I0314 19:09:06.271048  745981 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:09:06.277851  745981 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:09:06.286572  745981 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:09:06.291492  745981 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:09:06.298974  745981 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:09:06.303017  745981 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:09:06.318961  745981 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:09:06.559572  745981 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:09:06.683952  745981 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:09:06.685263  745981 kubeadm.go:309] 
	I0314 19:09:06.685328  745981 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:09:06.685332  745981 kubeadm.go:309] 
	I0314 19:09:06.685406  745981 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:09:06.685409  745981 kubeadm.go:309] 
	I0314 19:09:06.685459  745981 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:09:06.685515  745981 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:09:06.685563  745981 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:09:06.685566  745981 kubeadm.go:309] 
	I0314 19:09:06.685617  745981 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:09:06.685621  745981 kubeadm.go:309] 
	I0314 19:09:06.685665  745981 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:09:06.685672  745981 kubeadm.go:309] 
	I0314 19:09:06.685721  745981 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:09:06.685792  745981 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:09:06.685856  745981 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:09:06.685860  745981 kubeadm.go:309] 
	I0314 19:09:06.685940  745981 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:09:06.686015  745981 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:09:06.686018  745981 kubeadm.go:309] 
	I0314 19:09:06.686102  745981 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token n72hs9.h183vnoqcpst9kfh \
	I0314 19:09:06.686200  745981 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a4fe8fb4b69a78f77e63084830195d73baa70d21faafa8aaf573cb10334eb29d \
	I0314 19:09:06.686218  745981 kubeadm.go:309] 	--control-plane 
	I0314 19:09:06.686221  745981 kubeadm.go:309] 
	I0314 19:09:06.686302  745981 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:09:06.686305  745981 kubeadm.go:309] 
	I0314 19:09:06.686383  745981 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token n72hs9.h183vnoqcpst9kfh \
	I0314 19:09:06.686482  745981 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a4fe8fb4b69a78f77e63084830195d73baa70d21faafa8aaf573cb10334eb29d 
	I0314 19:09:06.690416  745981 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1055-aws\n", err: exit status 1
	I0314 19:09:06.690522  745981 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:09:06.690849  745981 cni.go:84] Creating CNI manager for ""
	I0314 19:09:06.690866  745981 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 19:09:06.693982  745981 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:09:06.696166  745981 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:09:06.716598  745981 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:09:06.737442  745981 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:09:06.737562  745981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:09:06.737636  745981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-364114 minikube.k8s.io/updated_at=2024_03_14T19_09_06_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=scheduled-stop-364114 minikube.k8s.io/primary=true
	I0314 19:09:07.044972  745981 ops.go:34] apiserver oom_adj: -16
	I0314 19:09:07.044990  745981 kubeadm.go:1106] duration metric: took 307.47905ms to wait for elevateKubeSystemPrivileges
	W0314 19:09:07.045005  745981 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:09:07.045010  745981 kubeadm.go:393] duration metric: took 14.847655992s to StartCluster
	I0314 19:09:07.045025  745981 settings.go:142] acquiring lock: {Name:mkfc2f1554604a8791fad9c92df19434d12a3d71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:09:07.045086  745981 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 19:09:07.045806  745981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/kubeconfig: {Name:mkede4700b9e8f4a9de6d389efb476a6ed252758 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:09:07.046020  745981 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 19:09:07.048715  745981 out.go:177] * Verifying Kubernetes components...
	I0314 19:09:07.046108  745981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 19:09:07.046282  745981 config.go:182] Loaded profile config "scheduled-stop-364114": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:09:07.046292  745981 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:09:07.049019  745981 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-364114"
	I0314 19:09:07.049044  745981 addons.go:234] Setting addon storage-provisioner=true in "scheduled-stop-364114"
	I0314 19:09:07.049071  745981 host.go:66] Checking if "scheduled-stop-364114" exists ...
	I0314 19:09:07.049600  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Status}}
	I0314 19:09:07.049741  745981 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-364114"
	I0314 19:09:07.049761  745981 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-364114"
	I0314 19:09:07.049987  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Status}}
	I0314 19:09:07.052933  745981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:09:07.087503  745981 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:09:07.090453  745981 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:09:07.090464  745981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:09:07.090530  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:09:07.097374  745981 addons.go:234] Setting addon default-storageclass=true in "scheduled-stop-364114"
	I0314 19:09:07.097403  745981 host.go:66] Checking if "scheduled-stop-364114" exists ...
	I0314 19:09:07.097893  745981 cli_runner.go:164] Run: docker container inspect scheduled-stop-364114 --format={{.State.Status}}
	I0314 19:09:07.139766  745981 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:09:07.139779  745981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:09:07.139842  745981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-364114
	I0314 19:09:07.141745  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:09:07.173121  745981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33709 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/scheduled-stop-364114/id_rsa Username:docker}
	I0314 19:09:07.360177  745981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:09:07.360350  745981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 19:09:07.411287  745981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:09:07.419472  745981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:09:08.473044  745981 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.112844352s)
	I0314 19:09:08.473952  745981 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:09:08.474004  745981 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:09:08.474145  745981 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.113779265s)
	I0314 19:09:08.474158  745981 start.go:948] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0314 19:09:08.647943  745981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.228443314s)
	I0314 19:09:08.648027  745981 api_server.go:72] duration metric: took 1.60198196s to wait for apiserver process to appear ...
	I0314 19:09:08.648037  745981 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:09:08.648055  745981 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0314 19:09:08.648181  745981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.236878164s)
	I0314 19:09:08.660242  745981 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0314 19:09:08.662689  745981 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 19:09:08.664417  745981 addons.go:505] duration metric: took 1.618120711s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0314 19:09:08.661873  745981 api_server.go:141] control plane version: v1.28.4
	I0314 19:09:08.664441  745981 api_server.go:131] duration metric: took 16.398204ms to wait for apiserver health ...
	I0314 19:09:08.664448  745981 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:09:08.671081  745981 system_pods.go:59] 5 kube-system pods found
	I0314 19:09:08.671101  745981 system_pods.go:61] "etcd-scheduled-stop-364114" [3a754417-46b5-47c4-bd2a-ad01da66f82d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:09:08.671109  745981 system_pods.go:61] "kube-apiserver-scheduled-stop-364114" [a78aada1-2a51-4c31-aad3-e41435be8eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:09:08.671116  745981 system_pods.go:61] "kube-controller-manager-scheduled-stop-364114" [a1431328-1dfc-487f-92ca-476919111d09] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:09:08.671123  745981 system_pods.go:61] "kube-scheduler-scheduled-stop-364114" [25163bc0-2630-4aff-a90f-7ed98a8414dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:09:08.671129  745981 system_pods.go:61] "storage-provisioner" [02d2fd0f-640e-43a7-bea6-db74cd806da5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0314 19:09:08.671137  745981 system_pods.go:74] duration metric: took 6.683672ms to wait for pod list to return data ...
	I0314 19:09:08.671147  745981 kubeadm.go:576] duration metric: took 1.62510638s to wait for: map[apiserver:true system_pods:true]
	I0314 19:09:08.671158  745981 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:09:08.674671  745981 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0314 19:09:08.674690  745981 node_conditions.go:123] node cpu capacity is 2
	I0314 19:09:08.674700  745981 node_conditions.go:105] duration metric: took 3.537515ms to run NodePressure ...
	I0314 19:09:08.674712  745981 start.go:240] waiting for startup goroutines ...
	I0314 19:09:08.978099  745981 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-364114" context rescaled to 1 replicas
	I0314 19:09:08.978133  745981 start.go:245] waiting for cluster config update ...
	I0314 19:09:08.978144  745981 start.go:254] writing updated cluster config ...
	I0314 19:09:08.978504  745981 ssh_runner.go:195] Run: rm -f paused
	I0314 19:09:09.042607  745981 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:09:09.047308  745981 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-364114" cluster and "default" namespace by default
	
	
	==> Docker <==
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.440050734Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.451295744Z" level=info msg="Loading containers: start."
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.541111880Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.579229547Z" level=info msg="Loading containers: done."
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.589975836Z" level=info msg="Docker daemon" commit=061aa95 containerd-snapshotter=false storage-driver=overlay2 version=25.0.4
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.590053940Z" level=info msg="Daemon has completed initialization"
	Mar 14 19:08:49 scheduled-stop-364114 systemd[1]: Started Docker Application Container Engine.
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.611853723Z" level=info msg="API listen on /var/run/docker.sock"
	Mar 14 19:08:49 scheduled-stop-364114 dockerd[1141]: time="2024-03-14T19:08:49.612007142Z" level=info msg="API listen on [::]:2376"
	Mar 14 19:08:50 scheduled-stop-364114 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Start docker client with request timeout 0s"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Loaded network plugin cni"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Docker cri networking managed by network plugin cni"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Docker Info: &{ID:cee107d8-1b98-480e-80bc-16c5e4b555c3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-03-14T19:08:50.082973165Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 22.04.4 LTS (containerized) OSVersion:22.04 OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0x4000552700 NCPU:2 MemTotal:8215035904 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:scheduled-stop-364114 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Setting cgroupDriver cgroupfs"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Mar 14 19:08:50 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:50Z" level=info msg="Start cri-dockerd grpc backend"
	Mar 14 19:08:50 scheduled-stop-364114 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Mar 14 19:08:59 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/18044836e8698e7c4e49ef521e5c23ece8c15e0175e58fc135611a1adc87924d/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options ndots:0 edns0 trust-ad]"
	Mar 14 19:08:59 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abf4518d48a44daf23091b70199c5f1dd18db0b551e4864493bb1eb941ea1f47/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Mar 14 19:08:59 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef8af96921edcbad1eb58eb5d835bc577248944506dd27794761ed11b0d4f057/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Mar 14 19:08:59 scheduled-stop-364114 cri-dockerd[1351]: time="2024-03-14T19:08:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f3975abdfc4fe47be4bf4913980695ebab843a70c25e44087032ca036518ab29/resolv.conf as [nameserver 192.168.76.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	24aa481934aca       05c284c929889       11 seconds ago      Running             kube-scheduler            0                   f3975abdfc4fe       kube-scheduler-scheduled-stop-364114
	ad6393bda9e91       9961cbceaf234       11 seconds ago      Running             kube-controller-manager   0                   ef8af96921edc       kube-controller-manager-scheduled-stop-364114
	0047bbd197315       04b4c447bb9d4       11 seconds ago      Running             kube-apiserver            0                   abf4518d48a44       kube-apiserver-scheduled-stop-364114
	4105a6b66aa14       9cdd6470f48c8       11 seconds ago      Running             etcd                      0                   18044836e8698       etcd-scheduled-stop-364114
	
	
	==> describe nodes <==
	Name:               scheduled-stop-364114
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-364114
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=scheduled-stop-364114
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_09_06_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:09:03 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-364114
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:09:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:09:07 +0000   Thu, 14 Mar 2024 19:09:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:09:07 +0000   Thu, 14 Mar 2024 19:09:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:09:07 +0000   Thu, 14 Mar 2024 19:09:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:09:07 +0000   Thu, 14 Mar 2024 19:09:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-364114
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec0dfaa0f129496186f59ac62e38d54c
	  System UUID:                8cf38215-3170-4395-a1e2-25ad182adfe9
	  Boot ID:                    82438414-92b7-424c-b6a1-17a6c30d7d8a
	  Kernel Version:             5.15.0-1055-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://25.0.4
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-364114                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3s
	  kube-system                 kube-apiserver-scheduled-stop-364114             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-scheduled-stop-364114    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-scheduler-scheduled-stop-364114             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 12s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet  Node scheduled-stop-364114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet  Node scheduled-stop-364114 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x7 over 12s)  kubelet  Node scheduled-stop-364114 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 4s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-364114 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-364114 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-364114 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4s                 kubelet  Node scheduled-stop-364114 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3s                 kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                 kubelet  Node scheduled-stop-364114 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000726] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000955] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=000000009fbc9a45
	[  +0.001085] FS-Cache: N-key=[8] 'c93c5c0100000000'
	[  +0.002623] FS-Cache: Duplicate cookie detected
	[  +0.000783] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=0000000055f1b311{9p.inode} n=00000000f6e2501b
	[  +0.001084] FS-Cache: O-key=[8] 'c93c5c0100000000'
	[  +0.000710] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000950] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=00000000aa1252b7
	[  +0.001111] FS-Cache: N-key=[8] 'c93c5c0100000000'
	[Mar14 18:42] FS-Cache: Duplicate cookie detected
	[  +0.000808] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001119] FS-Cache: O-cookie d=0000000055f1b311{9p.inode} n=000000000e87d01a
	[  +0.001090] FS-Cache: O-key=[8] 'c83c5c0100000000'
	[  +0.000799] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001005] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=000000001fb81ac7
	[  +0.001085] FS-Cache: N-key=[8] 'c83c5c0100000000'
	[  +0.459527] FS-Cache: Duplicate cookie detected
	[  +0.000795] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000996] FS-Cache: O-cookie d=0000000055f1b311{9p.inode} n=00000000d9c1756f
	[  +0.001171] FS-Cache: O-key=[8] 'ce3c5c0100000000'
	[  +0.000748] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=0000000055f1b311{9p.inode} n=000000009fbc9a45
	[  +0.001060] FS-Cache: N-key=[8] 'ce3c5c0100000000'
	[Mar14 18:53] hrtimer: interrupt took 7536866 ns
	
	
	==> etcd [4105a6b66aa1] <==
	{"level":"info","ts":"2024-03-14T19:08:59.666139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-03-14T19:08:59.66624Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-03-14T19:08:59.678767Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T19:08:59.67901Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-03-14T19:08:59.679236Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-03-14T19:08:59.680055Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T19:08:59.680217Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T19:08:59.937491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T19:08:59.937724Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T19:08:59.937835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-03-14T19:08:59.938Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:08:59.9381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-03-14T19:08:59.938207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-03-14T19:08:59.938346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-03-14T19:08:59.940447Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:scheduled-stop-364114 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:08:59.940618Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:08:59.94201Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:08:59.942259Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:08:59.943344Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:08:59.944396Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-03-14T19:08:59.949503Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:08:59.949583Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:08:59.969859Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:08:59.970045Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:08:59.970189Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:09:10 up  3:51,  0 users,  load average: 2.94, 2.46, 2.77
	Linux scheduled-stop-364114 5.15.0-1055-aws #60~20.04.1-Ubuntu SMP Thu Feb 22 15:54:21 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [0047bbd19731] <==
	I0314 19:09:03.588824       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:09:03.588849       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:09:03.588865       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:09:03.589100       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 19:09:03.589111       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 19:09:03.590821       1 controller.go:624] quota admission added evaluator for: namespaces
	I0314 19:09:03.606734       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 19:09:03.607077       1 aggregator.go:166] initial CRD sync complete...
	I0314 19:09:03.607186       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:09:03.607284       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:09:03.607357       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:09:03.795413       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:09:04.197649       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0314 19:09:04.202505       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0314 19:09:04.202644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 19:09:04.842189       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:09:04.895036       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 19:09:05.058891       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0314 19:09:05.072594       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0314 19:09:05.074481       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:09:05.080524       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:09:05.447360       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:09:06.544290       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:09:06.558132       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0314 19:09:06.571018       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ad6393bda9e9] <==
	W0314 19:09:08.518255       1 shared_informer.go:593] resyncPeriod 15h2m5.19780494s is smaller than resyncCheckPeriod 22h12m8.055871528s and the informer has already started. Changing it to 22h12m8.055871528s
	I0314 19:09:08.518305       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0314 19:09:08.518361       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0314 19:09:08.518369       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0314 19:09:08.518406       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0314 19:09:08.518419       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0314 19:09:08.751998       1 controllermanager.go:642] "Started controller" controller="namespace-controller"
	I0314 19:09:08.752205       1 namespace_controller.go:197] "Starting namespace controller"
	I0314 19:09:08.752300       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0314 19:09:08.894105       1 controllermanager.go:642] "Started controller" controller="deployment-controller"
	I0314 19:09:08.894180       1 deployment_controller.go:168] "Starting controller" controller="deployment"
	I0314 19:09:08.894198       1 shared_informer.go:311] Waiting for caches to sync for deployment
	I0314 19:09:09.044705       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0314 19:09:09.045187       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0314 19:09:09.045209       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0314 19:09:09.193896       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:09:09.193967       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:09:09.344610       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I0314 19:09:09.344965       1 replica_set.go:214] "Starting controller" name="replicaset"
	I0314 19:09:09.345150       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I0314 19:09:09.494179       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I0314 19:09:09.494297       1 stateful_set.go:161] "Starting stateful set controller"
	I0314 19:09:09.494306       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0314 19:09:09.542936       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:09:09.542997       1 cleaner.go:83] "Starting CSR cleaner controller"
	
	
	==> kube-scheduler [24aa481934ac] <==
	W0314 19:09:03.989652       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 19:09:03.989666       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 19:09:03.989721       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:09:03.989736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:09:03.989797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:09:03.989812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 19:09:03.989853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:09:03.989864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:09:03.989984       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 19:09:03.990002       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 19:09:03.990036       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:09:03.990051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:09:03.990086       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:09:03.990101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:09:03.990151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 19:09:03.990168       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 19:09:03.990207       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 19:09:03.990223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 19:09:03.991470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 19:09:03.991502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 19:09:03.991474       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 19:09:03.991523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 19:09:04.799730       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:09:04.799770       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:09:06.878693       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.075346    2461 topology_manager.go:215] "Topology Admit Handler" podUID="74ee2cddf208a6b1f67e289c25e6b495" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.102571    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103027    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103188    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74ee2cddf208a6b1f67e289c25e6b495-kubeconfig\") pod \"kube-scheduler-scheduled-stop-364114\" (UID: \"74ee2cddf208a6b1f67e289c25e6b495\") " pod="kube-system/kube-scheduler-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103278    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1bff586f67b13b615392632eac88045e-etcd-data\") pod \"etcd-scheduled-stop-364114\" (UID: \"1bff586f67b13b615392632eac88045e\") " pod="kube-system/etcd-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103364    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4318140df70c339b7c9559c5775ee02-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-364114\" (UID: \"f4318140df70c339b7c9559c5775ee02\") " pod="kube-system/kube-apiserver-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103446    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103543    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103651    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103737    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1bff586f67b13b615392632eac88045e-etcd-certs\") pod \"etcd-scheduled-stop-364114\" (UID: \"1bff586f67b13b615392632eac88045e\") " pod="kube-system/etcd-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103821    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4318140df70c339b7c9559c5775ee02-k8s-certs\") pod \"kube-apiserver-scheduled-stop-364114\" (UID: \"f4318140df70c339b7c9559c5775ee02\") " pod="kube-system/kube-apiserver-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103896    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-ca-certs\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.103981    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4318140df70c339b7c9559c5775ee02-ca-certs\") pod \"kube-apiserver-scheduled-stop-364114\" (UID: \"f4318140df70c339b7c9559c5775ee02\") " pod="kube-system/kube-apiserver-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.104065    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4318140df70c339b7c9559c5775ee02-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-364114\" (UID: \"f4318140df70c339b7c9559c5775ee02\") " pod="kube-system/kube-apiserver-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.104143    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4318140df70c339b7c9559c5775ee02-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-364114\" (UID: \"f4318140df70c339b7c9559c5775ee02\") " pod="kube-system/kube-apiserver-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.104233    2461 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e447e23d396c1c84104e5a904cea547b-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-364114\" (UID: \"e447e23d396c1c84104e5a904cea547b\") " pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: E0314 19:09:07.130158    2461 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-scheduled-stop-364114\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: E0314 19:09:07.138992    2461 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-scheduled-stop-364114\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-364114"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.182448    2461 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.631769    2461 apiserver.go:52] "Watching apiserver"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.698628    2461 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.924069    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-364114" podStartSLOduration=0.92399646 podCreationTimestamp="2024-03-14 19:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:09:07.911971541 +0000 UTC m=+1.400610732" watchObservedRunningTime="2024-03-14 19:09:07.92399646 +0000 UTC m=+1.412635635"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.940281    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-364114" podStartSLOduration=0.940227059 podCreationTimestamp="2024-03-14 19:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:09:07.924973915 +0000 UTC m=+1.413613098" watchObservedRunningTime="2024-03-14 19:09:07.940227059 +0000 UTC m=+1.428866242"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.940563    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-364114" podStartSLOduration=2.9405388759999997 podCreationTimestamp="2024-03-14 19:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:09:07.939428819 +0000 UTC m=+1.428068002" watchObservedRunningTime="2024-03-14 19:09:07.940538876 +0000 UTC m=+1.429178059"
	Mar 14 19:09:07 scheduled-stop-364114 kubelet[2461]: I0314 19:09:07.980469    2461 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-364114" podStartSLOduration=3.980427636 podCreationTimestamp="2024-03-14 19:09:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-03-14 19:09:07.957040678 +0000 UTC m=+1.445679853" watchObservedRunningTime="2024-03-14 19:09:07.980427636 +0000 UTC m=+1.469066819"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-364114 -n scheduled-stop-364114
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-364114 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-364114 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-364114 describe pod storage-provisioner: exit status 1 (101.431087ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-364114 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-364114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-364114
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-364114: (2.144741607s)
--- FAIL: TestScheduledStopUnix (35.30s)


Test pass (321/350)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.31
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 11.84
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.22
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 12.79
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.22
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.58
31 TestOffline 95.43
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.13
36 TestAddons/Setup 147.89
38 TestAddons/parallel/Registry 16.29
40 TestAddons/parallel/InspektorGadget 11.92
41 TestAddons/parallel/MetricsServer 6.88
44 TestAddons/parallel/CSI 38.63
45 TestAddons/parallel/Headlamp 12.42
46 TestAddons/parallel/CloudSpanner 5.52
47 TestAddons/parallel/LocalPath 52.36
48 TestAddons/parallel/NvidiaDevicePlugin 5.57
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.18
53 TestAddons/StoppedEnableDisable 11.2
54 TestCertOptions 38.26
55 TestCertExpiration 253.71
56 TestDockerFlags 46.47
57 TestForceSystemdFlag 36.88
58 TestForceSystemdEnv 43.89
64 TestErrorSpam/setup 32.08
65 TestErrorSpam/start 0.79
66 TestErrorSpam/status 1.07
67 TestErrorSpam/pause 1.3
68 TestErrorSpam/unpause 1.54
69 TestErrorSpam/stop 2.05
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 84.39
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 36.62
76 TestFunctional/serial/KubeContext 0.07
77 TestFunctional/serial/KubectlGetPods 0.1
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.91
81 TestFunctional/serial/CacheCmd/cache/add_local 1.07
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.17
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 41.9
90 TestFunctional/serial/ComponentHealth 0.12
91 TestFunctional/serial/LogsCmd 1.24
92 TestFunctional/serial/LogsFileCmd 1.22
93 TestFunctional/serial/InvalidService 4.83
95 TestFunctional/parallel/ConfigCmd 0.59
96 TestFunctional/parallel/DashboardCmd 14.61
97 TestFunctional/parallel/DryRun 0.47
98 TestFunctional/parallel/InternationalLanguage 0.23
99 TestFunctional/parallel/StatusCmd 1.19
103 TestFunctional/parallel/ServiceCmdConnect 13.66
104 TestFunctional/parallel/AddonsCmd 0.2
105 TestFunctional/parallel/PersistentVolumeClaim 28.32
107 TestFunctional/parallel/SSHCmd 0.71
108 TestFunctional/parallel/CpCmd 2.36
110 TestFunctional/parallel/FileSync 0.38
111 TestFunctional/parallel/CertSync 2.09
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.39
119 TestFunctional/parallel/License 0.27
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
133 TestFunctional/parallel/ProfileCmd/profile_list 0.43
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
135 TestFunctional/parallel/ServiceCmd/List 0.59
136 TestFunctional/parallel/MountCmd/any-port 7.74
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.73
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
139 TestFunctional/parallel/ServiceCmd/Format 0.51
140 TestFunctional/parallel/ServiceCmd/URL 0.49
141 TestFunctional/parallel/Version/short 0.11
142 TestFunctional/parallel/Version/components 1.21
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
147 TestFunctional/parallel/ImageCommands/ImageBuild 2.75
148 TestFunctional/parallel/ImageCommands/Setup 1.99
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.62
150 TestFunctional/parallel/MountCmd/specific-port 2.3
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.51
152 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.29
153 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.19
154 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.62
155 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
156 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.46
157 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.95
158 TestFunctional/parallel/DockerEnv/bash 1.09
159 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
160 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
161 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
162 TestFunctional/delete_addon-resizer_images 0.08
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMutliControlPlane/serial/StartCluster 145.03
169 TestMutliControlPlane/serial/DeployApp 7.45
170 TestMutliControlPlane/serial/PingHostFromPods 1.96
171 TestMutliControlPlane/serial/AddWorkerNode 27.1
172 TestMutliControlPlane/serial/NodeLabels 0.12
173 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.8
174 TestMutliControlPlane/serial/CopyFile 21.32
175 TestMutliControlPlane/serial/StopSecondaryNode 11.72
176 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
177 TestMutliControlPlane/serial/RestartSecondaryNode 39.63
178 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.54
179 TestMutliControlPlane/serial/RestartClusterKeepsNodes 243.26
180 TestMutliControlPlane/serial/DeleteSecondaryNode 8.93
181 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
182 TestMutliControlPlane/serial/StopCluster 23.51
183 TestMutliControlPlane/serial/RestartCluster 93.88
184 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.58
185 TestMutliControlPlane/serial/AddSecondaryNode 45.73
186 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
189 TestImageBuild/serial/Setup 31.73
190 TestImageBuild/serial/NormalBuild 1.98
191 TestImageBuild/serial/BuildWithBuildArg 0.97
192 TestImageBuild/serial/BuildWithDockerIgnore 0.77
193 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.78
197 TestJSONOutput/start/Command 56.47
198 TestJSONOutput/start/Audit 0
200 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/pause/Command 0.6
204 TestJSONOutput/pause/Audit 0
206 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
209 TestJSONOutput/unpause/Command 0.54
210 TestJSONOutput/unpause/Audit 0
212 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
213 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
215 TestJSONOutput/stop/Command 5.78
216 TestJSONOutput/stop/Audit 0
218 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
219 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
220 TestErrorJSONOutput 0.25
222 TestKicCustomNetwork/create_custom_network 33.5
223 TestKicCustomNetwork/use_default_bridge_network 33.49
224 TestKicExistingNetwork 38.51
225 TestKicCustomSubnet 39.46
226 TestKicStaticIP 36.73
227 TestMainNoArgs 0.07
228 TestMinikubeProfile 76.29
231 TestMountStart/serial/StartWithMountFirst 9.3
232 TestMountStart/serial/VerifyMountFirst 0.29
233 TestMountStart/serial/StartWithMountSecond 7.6
234 TestMountStart/serial/VerifyMountSecond 0.29
235 TestMountStart/serial/DeleteFirst 1.48
236 TestMountStart/serial/VerifyMountPostDelete 0.27
237 TestMountStart/serial/Stop 1.22
238 TestMountStart/serial/RestartStopped 8.32
239 TestMountStart/serial/VerifyMountPostStop 0.29
242 TestMultiNode/serial/FreshStart2Nodes 81.53
243 TestMultiNode/serial/DeployApp2Nodes 36.62
244 TestMultiNode/serial/PingHostFrom2Pods 1.16
245 TestMultiNode/serial/AddNode 20.61
246 TestMultiNode/serial/MultiNodeLabels 0.09
247 TestMultiNode/serial/ProfileList 0.34
248 TestMultiNode/serial/CopyFile 10.99
249 TestMultiNode/serial/StopNode 2.35
250 TestMultiNode/serial/StartAfterStop 11.73
251 TestMultiNode/serial/RestartKeepsNodes 70.41
252 TestMultiNode/serial/DeleteNode 5.62
253 TestMultiNode/serial/StopMultiNode 21.68
254 TestMultiNode/serial/RestartMultiNode 32.21
255 TestMultiNode/serial/ValidateNameConflict 34.82
260 TestPreload 154.89
263 TestSkaffold 124.41
265 TestInsufficientStorage 11.74
266 TestRunningBinaryUpgrade 119.31
268 TestKubernetesUpgrade 126.7
269 TestMissingContainerUpgrade 113.64
281 TestStoppedBinaryUpgrade/Setup 1.39
282 TestStoppedBinaryUpgrade/Upgrade 92.16
284 TestPause/serial/Start 99.46
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.79
294 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
295 TestNoKubernetes/serial/StartWithK8s 41.51
296 TestNoKubernetes/serial/StartWithStopK8s 16.83
297 TestNoKubernetes/serial/Start 9.71
298 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
299 TestNoKubernetes/serial/ProfileList 1.09
300 TestNoKubernetes/serial/Stop 1.23
301 TestNoKubernetes/serial/StartNoArgs 7.65
302 TestPause/serial/SecondStartNoReconfiguration 39.17
303 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
304 TestNetworkPlugins/group/auto/Start 93.04
305 TestPause/serial/Pause 0.9
306 TestPause/serial/VerifyStatus 0.58
307 TestPause/serial/Unpause 0.84
308 TestPause/serial/PauseAgain 0.94
309 TestPause/serial/DeletePaused 2.42
310 TestPause/serial/VerifyDeletedResources 0.75
311 TestNetworkPlugins/group/kindnet/Start 69.79
312 TestNetworkPlugins/group/auto/KubeletFlags 0.37
313 TestNetworkPlugins/group/auto/NetCatPod 11.35
314 TestNetworkPlugins/group/auto/DNS 0.35
315 TestNetworkPlugins/group/auto/Localhost 0.22
316 TestNetworkPlugins/group/auto/HairPin 0.21
317 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
318 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
319 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
320 TestNetworkPlugins/group/kindnet/DNS 0.35
321 TestNetworkPlugins/group/kindnet/Localhost 0.35
322 TestNetworkPlugins/group/kindnet/HairPin 0.28
323 TestNetworkPlugins/group/calico/Start 97.79
324 TestNetworkPlugins/group/custom-flannel/Start 71.03
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
327 TestNetworkPlugins/group/calico/ControllerPod 6.01
328 TestNetworkPlugins/group/calico/KubeletFlags 0.31
329 TestNetworkPlugins/group/calico/NetCatPod 11.29
330 TestNetworkPlugins/group/custom-flannel/DNS 0.29
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.26
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
333 TestNetworkPlugins/group/calico/DNS 0.48
334 TestNetworkPlugins/group/calico/Localhost 0.25
335 TestNetworkPlugins/group/calico/HairPin 0.21
336 TestNetworkPlugins/group/false/Start 101.69
337 TestNetworkPlugins/group/enable-default-cni/Start 95.65
338 TestNetworkPlugins/group/false/KubeletFlags 0.32
339 TestNetworkPlugins/group/false/NetCatPod 11.33
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
342 TestNetworkPlugins/group/false/DNS 0.19
343 TestNetworkPlugins/group/false/Localhost 0.17
344 TestNetworkPlugins/group/false/HairPin 0.2
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
348 TestNetworkPlugins/group/flannel/Start 71.8
349 TestNetworkPlugins/group/bridge/Start 94.8
350 TestNetworkPlugins/group/flannel/ControllerPod 6.01
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
352 TestNetworkPlugins/group/flannel/NetCatPod 10.3
353 TestNetworkPlugins/group/flannel/DNS 0.2
354 TestNetworkPlugins/group/flannel/Localhost 0.19
355 TestNetworkPlugins/group/flannel/HairPin 0.18
356 TestNetworkPlugins/group/bridge/KubeletFlags 0.46
357 TestNetworkPlugins/group/bridge/NetCatPod 11.44
358 TestNetworkPlugins/group/bridge/DNS 0.36
359 TestNetworkPlugins/group/bridge/Localhost 0.2
360 TestNetworkPlugins/group/bridge/HairPin 0.19
361 TestNetworkPlugins/group/kubenet/Start 96.84
363 TestStartStop/group/old-k8s-version/serial/FirstStart 150.24
364 TestNetworkPlugins/group/kubenet/KubeletFlags 0.32
365 TestNetworkPlugins/group/kubenet/NetCatPod 10.3
366 TestNetworkPlugins/group/kubenet/DNS 0.2
367 TestNetworkPlugins/group/kubenet/Localhost 0.19
368 TestNetworkPlugins/group/kubenet/HairPin 0.17
370 TestStartStop/group/no-preload/serial/FirstStart 66.83
371 TestStartStop/group/old-k8s-version/serial/DeployApp 9.63
372 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.52
373 TestStartStop/group/old-k8s-version/serial/Stop 11.33
374 TestStartStop/group/no-preload/serial/DeployApp 9.41
375 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
376 TestStartStop/group/old-k8s-version/serial/SecondStart 376.31
377 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.71
378 TestStartStop/group/no-preload/serial/Stop 11.29
379 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
380 TestStartStop/group/no-preload/serial/SecondStart 266.88
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
384 TestStartStop/group/no-preload/serial/Pause 3.13
386 TestStartStop/group/embed-certs/serial/FirstStart 87.24
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
390 TestStartStop/group/old-k8s-version/serial/Pause 2.91
391 TestStartStop/group/embed-certs/serial/DeployApp 8.38
393 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.39
394 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
395 TestStartStop/group/embed-certs/serial/Stop 11.03
396 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
397 TestStartStop/group/embed-certs/serial/SecondStart 272.32
398 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.52
399 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.24
400 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.79
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
402 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 270.37
403 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
404 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
405 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
406 TestStartStop/group/embed-certs/serial/Pause 3.29
408 TestStartStop/group/newest-cni/serial/FirstStart 46.72
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
411 TestStartStop/group/newest-cni/serial/DeployApp 0
412 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.59
413 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.33
414 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.58
415 TestStartStop/group/newest-cni/serial/Stop 6.07
416 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
417 TestStartStop/group/newest-cni/serial/SecondStart 17.86
418 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
419 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
420 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
421 TestStartStop/group/newest-cni/serial/Pause 2.82
TestDownloadOnly/v1.20.0/json-events (14.31s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-747918 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-747918 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (14.306028568s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.31s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-747918
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-747918: exit status 85 (81.412695ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-747918 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |          |
	|         | -p download-only-747918        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:32:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:32:21.219045  548314 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:32:21.219272  548314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:21.219299  548314 out.go:304] Setting ErrFile to fd 2...
	I0314 18:32:21.219318  548314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:21.219610  548314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	W0314 18:32:21.219778  548314 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18384-542901/.minikube/config/config.json: open /home/jenkins/minikube-integration/18384-542901/.minikube/config/config.json: no such file or directory
	I0314 18:32:21.220242  548314 out.go:298] Setting JSON to true
	I0314 18:32:21.221168  548314 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11685,"bootTime":1710429457,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 18:32:21.221265  548314 start.go:139] virtualization:  
	I0314 18:32:21.224906  548314 out.go:97] [download-only-747918] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W0314 18:32:21.225081  548314 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball: no such file or directory
	I0314 18:32:21.227340  548314 out.go:169] MINIKUBE_LOCATION=18384
	I0314 18:32:21.225205  548314 notify.go:220] Checking for updates...
	I0314 18:32:21.231210  548314 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:32:21.233255  548314 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:32:21.235418  548314 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 18:32:21.237631  548314 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0314 18:32:21.241842  548314 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 18:32:21.242120  548314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:32:21.262488  548314 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:32:21.262609  548314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:32:21.327565  548314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 18:32:21.3180211 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:32:21.327678  548314 docker.go:295] overlay module found
	I0314 18:32:21.329775  548314 out.go:97] Using the docker driver based on user configuration
	I0314 18:32:21.329812  548314 start.go:297] selected driver: docker
	I0314 18:32:21.329822  548314 start.go:901] validating driver "docker" against <nil>
	I0314 18:32:21.329925  548314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:32:21.382956  548314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-03-14 18:32:21.373572357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:32:21.383129  548314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:32:21.383424  548314 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0314 18:32:21.383586  548314 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 18:32:21.385910  548314 out.go:169] Using Docker driver with root privileges
	I0314 18:32:21.388154  548314 cni.go:84] Creating CNI manager for ""
	I0314 18:32:21.388185  548314 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 18:32:21.388276  548314 start.go:340] cluster config:
	{Name:download-only-747918 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-747918 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:32:21.390461  548314 out.go:97] Starting "download-only-747918" primary control-plane node in "download-only-747918" cluster
	I0314 18:32:21.390483  548314 cache.go:121] Beginning downloading kic base image for docker with docker
	I0314 18:32:21.392614  548314 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 18:32:21.392639  548314 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 18:32:21.392802  548314 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 18:32:21.407344  548314 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 18:32:21.407999  548314 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 18:32:21.408104  548314 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 18:32:21.469266  548314 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 18:32:21.469301  548314 cache.go:56] Caching tarball of preloaded images
	I0314 18:32:21.470149  548314 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 18:32:21.472787  548314 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 18:32:21.472857  548314 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:21.588178  548314 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 18:32:27.727385  548314 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	
	
	* The control-plane node download-only-747918 host does not exist
	  To start a cluster, run: "minikube start -p download-only-747918"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-747918
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.28.4/json-events (11.84s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-206632 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-206632 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.836429953s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (11.84s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-206632
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-206632: exit status 85 (83.416515ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-747918 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	|         | -p download-only-747918        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	| delete  | -p download-only-747918        | download-only-747918 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	| start   | -o=json --download-only        | download-only-206632 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	|         | -p download-only-206632        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:32:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:32:35.971019  548482 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:32:35.971159  548482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:35.971195  548482 out.go:304] Setting ErrFile to fd 2...
	I0314 18:32:35.971207  548482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:35.971463  548482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:32:35.971913  548482 out.go:298] Setting JSON to true
	I0314 18:32:35.972803  548482 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11699,"bootTime":1710429457,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 18:32:35.972882  548482 start.go:139] virtualization:  
	I0314 18:32:35.975420  548482 out.go:97] [download-only-206632] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 18:32:35.977593  548482 out.go:169] MINIKUBE_LOCATION=18384
	I0314 18:32:35.975605  548482 notify.go:220] Checking for updates...
	I0314 18:32:35.981441  548482 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:32:35.983783  548482 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:32:35.985775  548482 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 18:32:35.987517  548482 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0314 18:32:35.991318  548482 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 18:32:35.991617  548482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:32:36.020486  548482 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:32:36.020660  548482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:32:36.091033  548482 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 18:32:36.080652073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:32:36.091216  548482 docker.go:295] overlay module found
	I0314 18:32:36.093139  548482 out.go:97] Using the docker driver based on user configuration
	I0314 18:32:36.093168  548482 start.go:297] selected driver: docker
	I0314 18:32:36.093175  548482 start.go:901] validating driver "docker" against <nil>
	I0314 18:32:36.093298  548482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:32:36.149025  548482 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 18:32:36.139463799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:32:36.149210  548482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:32:36.149533  548482 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0314 18:32:36.149724  548482 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 18:32:36.152176  548482 out.go:169] Using Docker driver with root privileges
	I0314 18:32:36.154329  548482 cni.go:84] Creating CNI manager for ""
	I0314 18:32:36.154377  548482 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:32:36.154389  548482 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 18:32:36.154480  548482 start.go:340] cluster config:
	{Name:download-only-206632 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-206632 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:32:36.156619  548482 out.go:97] Starting "download-only-206632" primary control-plane node in "download-only-206632" cluster
	I0314 18:32:36.156652  548482 cache.go:121] Beginning downloading kic base image for docker with docker
	I0314 18:32:36.158579  548482 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 18:32:36.158610  548482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:32:36.158749  548482 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 18:32:36.176454  548482 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 18:32:36.176604  548482 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 18:32:36.176624  548482 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 18:32:36.176628  548482 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 18:32:36.176637  548482 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 18:32:36.234360  548482 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 18:32:36.234385  548482 cache.go:56] Caching tarball of preloaded images
	I0314 18:32:36.235454  548482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:32:36.237964  548482 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0314 18:32:36.237990  548482 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:36.355720  548482 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 18:32:40.788150  548482 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:40.788270  548482 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:41.692020  548482 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 18:32:41.692441  548482 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/download-only-206632/config.json ...
	I0314 18:32:41.692481  548482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/download-only-206632/config.json: {Name:mk11f60a880e8217f1cb6d669ef1f75df5739c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:32:41.692689  548482 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 18:32:41.693361  548482 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18384-542901/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-206632 host does not exist
	  To start a cluster, run: "minikube start -p download-only-206632"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.22s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-206632
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (12.79s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-800676 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-800676 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.791620487s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (12.79s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-800676
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-800676: exit status 85 (92.092032ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-747918 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	|         | -p download-only-747918           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	| delete  | -p download-only-747918           | download-only-747918 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	| start   | -o=json --download-only           | download-only-206632 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	|         | -p download-only-206632           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	| delete  | -p download-only-206632           | download-only-206632 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	| start   | -o=json --download-only           | download-only-800676 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	|         | -p download-only-800676           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:32:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:32:48.261102  548643 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:32:48.261289  548643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:48.261318  548643 out.go:304] Setting ErrFile to fd 2...
	I0314 18:32:48.261341  548643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:48.261631  548643 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:32:48.262070  548643 out.go:298] Setting JSON to true
	I0314 18:32:48.262942  548643 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11712,"bootTime":1710429457,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 18:32:48.263043  548643 start.go:139] virtualization:  
	I0314 18:32:48.265906  548643 out.go:97] [download-only-800676] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 18:32:48.268164  548643 out.go:169] MINIKUBE_LOCATION=18384
	I0314 18:32:48.266137  548643 notify.go:220] Checking for updates...
	I0314 18:32:48.270000  548643 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:32:48.271716  548643 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:32:48.274155  548643 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 18:32:48.276447  548643 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0314 18:32:48.279927  548643 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 18:32:48.280199  548643 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:32:48.303916  548643 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:32:48.304009  548643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:32:48.358542  548643 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 18:32:48.349796884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:32:48.358646  548643 docker.go:295] overlay module found
	I0314 18:32:48.367959  548643 out.go:97] Using the docker driver based on user configuration
	I0314 18:32:48.367997  548643 start.go:297] selected driver: docker
	I0314 18:32:48.368006  548643 start.go:901] validating driver "docker" against <nil>
	I0314 18:32:48.368161  548643 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:32:48.422475  548643 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-03-14 18:32:48.413857209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:32:48.422648  548643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:32:48.422921  548643 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0314 18:32:48.423088  548643 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 18:32:48.425238  548643 out.go:169] Using Docker driver with root privileges
	I0314 18:32:48.427007  548643 cni.go:84] Creating CNI manager for ""
	I0314 18:32:48.427046  548643 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 18:32:48.427056  548643 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 18:32:48.427154  548643 start.go:340] cluster config:
	{Name:download-only-800676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-800676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:32:48.429091  548643 out.go:97] Starting "download-only-800676" primary control-plane node in "download-only-800676" cluster
	I0314 18:32:48.429119  548643 cache.go:121] Beginning downloading kic base image for docker with docker
	I0314 18:32:48.430982  548643 out.go:97] Pulling base image v0.0.42-1710284843-18375 ...
	I0314 18:32:48.431007  548643 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 18:32:48.431182  548643 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local docker daemon
	I0314 18:32:48.445536  548643 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f to local cache
	I0314 18:32:48.445666  548643 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory
	I0314 18:32:48.445689  548643 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f in local cache directory, skipping pull
	I0314 18:32:48.445698  548643 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f exists in cache, skipping pull
	I0314 18:32:48.445706  548643 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f as a tarball
	I0314 18:32:48.493553  548643 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0314 18:32:48.493577  548643 cache.go:56] Caching tarball of preloaded images
	I0314 18:32:48.494270  548643 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 18:32:48.496534  548643 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0314 18:32:48.496557  548643 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:48.584471  548643 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0314 18:32:54.134890  548643 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:54.135015  548643 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18384-542901/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0314 18:32:54.932624  548643 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0314 18:32:54.933006  548643 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/download-only-800676/config.json ...
	I0314 18:32:54.933043  548643 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/download-only-800676/config.json: {Name:mkc696fe4e2555375f1abb9b6ae57832fe4823ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:32:54.933669  548643 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 18:32:54.934221  548643 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18384-542901/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-800676 host does not exist
	  To start a cluster, run: "minikube start -p download-only-800676"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.09s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.22s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-800676
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-030368 --alsologtostderr --binary-mirror http://127.0.0.1:39555 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-030368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-030368
--- PASS: TestBinaryMirror (0.58s)

TestOffline (95.43s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-734760 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-734760 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m33.066550955s)
helpers_test.go:175: Cleaning up "offline-docker-734760" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-734760
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-734760: (2.363932834s)
--- PASS: TestOffline (95.43s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-511560
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-511560: exit status 85 (80.084664ms)

-- stdout --
	* Profile "addons-511560" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-511560"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.13s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-511560
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-511560: exit status 85 (126.972836ms)

-- stdout --
	* Profile "addons-511560" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-511560"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.13s)

TestAddons/Setup (147.89s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-511560 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-511560 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m27.882936551s)
--- PASS: TestAddons/Setup (147.89s)

TestAddons/parallel/Registry (16.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 43.319853ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5r7v7" [6a23b094-28b6-478f-9c8d-f6ba7e0d8f45] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004836582s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gxw9p" [07a23dc8-cdad-49d7-b3d7-652ef0b03f9c] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008381725s
addons_test.go:340: (dbg) Run:  kubectl --context addons-511560 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-511560 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-511560 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.909814934s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 ip
2024/03/14 18:35:46 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.29s)

TestAddons/parallel/InspektorGadget (11.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hp74l" [9c44420f-a079-416f-9f52-49a06dc3d4a5] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.01892002s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-511560
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-511560: (5.897580714s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

TestAddons/parallel/MetricsServer (6.88s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 5.863079ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-v4k6z" [edf6e7c7-a19a-443d-abe6-43d7c38ef033] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005387621s
addons_test.go:415: (dbg) Run:  kubectl --context addons-511560 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.88s)

TestAddons/parallel/CSI (38.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 42.164537ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-511560 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-511560 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [b277eb31-4910-4a08-af44-9129170c393c] Pending
helpers_test.go:344: "task-pv-pod" [b277eb31-4910-4a08-af44-9129170c393c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [b277eb31-4910-4a08-af44-9129170c393c] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005674099s
addons_test.go:584: (dbg) Run:  kubectl --context addons-511560 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-511560 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-511560 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-511560 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-511560 delete pod task-pv-pod: (1.328833985s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-511560 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-511560 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-511560 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4a34fd9-72b7-480f-b277-4b0885540d5f] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4a34fd9-72b7-480f-b277-4b0885540d5f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4a34fd9-72b7-480f-b277-4b0885540d5f] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003923332s
addons_test.go:626: (dbg) Run:  kubectl --context addons-511560 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-511560 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-511560 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-511560 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.759543317s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (38.63s)

TestAddons/parallel/Headlamp (12.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-511560 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-511560 --alsologtostderr -v=1: (1.414615042s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-4v74p" [6bc59809-fa6f-4e48-8ca1-c14750812cf7] Pending
helpers_test.go:344: "headlamp-5485c556b-4v74p" [6bc59809-fa6f-4e48-8ca1-c14750812cf7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-4v74p" [6bc59809-fa6f-4e48-8ca1-c14750812cf7] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004670873s
--- PASS: TestAddons/parallel/Headlamp (12.42s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-8pjls" [3fad93b0-8165-43e6-a109-294ea9b042ad] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003602585s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-511560
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (52.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-511560 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-511560 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-511560 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f8f17593-07ec-423c-ac3a-fd4aae66f5d2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f8f17593-07ec-423c-ac3a-fd4aae66f5d2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f8f17593-07ec-423c-ac3a-fd4aae66f5d2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004281311s
addons_test.go:891: (dbg) Run:  kubectl --context addons-511560 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 ssh "cat /opt/local-path-provisioner/pvc-e2c10d25-b178-46d7-b7e9-3f699f3ef4aa_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-511560 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-511560 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-511560 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-arm64 -p addons-511560 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.139942418s)
--- PASS: TestAddons/parallel/LocalPath (52.36s)

TestAddons/parallel/NvidiaDevicePlugin (5.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s5nv9" [84a64e11-098c-4556-b5de-76b2a3590dd1] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004357799s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-511560
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-sv57l" [1e3433c9-9169-4c3c-bad0-b5d77de495c0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004330224s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-511560 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-511560 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (11.2s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-511560
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-511560: (10.890983814s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-511560
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-511560
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-511560
--- PASS: TestAddons/StoppedEnableDisable (11.20s)

TestCertOptions (38.26s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-444088 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-444088 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.496264372s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-444088 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-444088 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-444088 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-444088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-444088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-444088: (2.064262949s)
--- PASS: TestCertOptions (38.26s)

TestCertExpiration (253.71s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-467540 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0314 19:13:34.165863  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-467540 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (44.828768073s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-467540 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-467540 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (26.643441141s)
helpers_test.go:175: Cleaning up "cert-expiration-467540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-467540
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-467540: (2.224790258s)
--- PASS: TestCertExpiration (253.71s)

TestDockerFlags (46.47s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-334783 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-334783 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.111611605s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-334783 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-334783 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-334783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-334783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-334783: (2.500511169s)
--- PASS: TestDockerFlags (46.47s)

TestForceSystemdFlag (36.88s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-543390 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-543390 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (34.275171136s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-543390 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-543390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-543390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-543390: (2.240426353s)
--- PASS: TestForceSystemdFlag (36.88s)

TestForceSystemdEnv (43.89s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-014601 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-014601 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.254449286s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-014601 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-014601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-014601
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-014601: (2.24288198s)
--- PASS: TestForceSystemdEnv (43.89s)

TestErrorSpam/setup (32.08s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-836353 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-836353 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-836353 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-836353 --driver=docker  --container-runtime=docker: (32.08012399s)
--- PASS: TestErrorSpam/setup (32.08s)

TestErrorSpam/start (0.79s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.07s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.3s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 pause
--- PASS: TestErrorSpam/pause (1.30s)

TestErrorSpam/unpause (1.54s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 unpause
--- PASS: TestErrorSpam/unpause (1.54s)

TestErrorSpam/stop (2.05s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 stop: (1.823444361s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-836353 --log_dir /tmp/nospam-836353 stop
--- PASS: TestErrorSpam/stop (2.05s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18384-542901/.minikube/files/etc/test/nested/copy/548309/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (84.39s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-455177 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-455177 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m24.390717956s)
--- PASS: TestFunctional/serial/StartWithProxy (84.39s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.62s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-455177 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-455177 --alsologtostderr -v=8: (36.612909155s)
functional_test.go:659: soft start took 36.616221915s for "functional-455177" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.62s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-455177 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.91s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 cache add registry.k8s.io/pause:3.1: (1.051360753s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 cache add registry.k8s.io/pause:3.3: (1.013574589s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.91s)

TestFunctional/serial/CacheCmd/cache/add_local (1.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-455177 /tmp/TestFunctionalserialCacheCmdcacheadd_local1238065338/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cache add minikube-local-cache-test:functional-455177
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cache delete minikube-local-cache-test:functional-455177
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-455177
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh sudo docker rmi registry.k8s.io/pause:latest
E0314 18:40:31.107342  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:31.114287  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:31.133659  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:31.154172  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:31.194441  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0314 18:40:31.275350  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:31.435801  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (336.743635ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cache reload
E0314 18:40:31.756657  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0314 18:40:32.397028  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 kubectl -- --context functional-455177 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-455177 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (41.9s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-455177 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0314 18:40:33.678020  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:36.238263  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:41.358907  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:40:51.600122  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 18:41:12.080940  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-455177 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.903424716s)
functional_test.go:757: restart took 41.903582148s for "functional-455177" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.90s)

TestFunctional/serial/ComponentHealth (0.12s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-455177 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.24s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 logs: (1.239398396s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.22s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 logs --file /tmp/TestFunctionalserialLogsFileCmd402706855/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 logs --file /tmp/TestFunctionalserialLogsFileCmd402706855/001/logs.txt: (1.214656931s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.22s)

TestFunctional/serial/InvalidService (4.83s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-455177 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-455177
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-455177: exit status 115 (640.647096ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31795 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-455177 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.83s)

TestFunctional/parallel/ConfigCmd (0.59s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 config get cpus: exit status 14 (87.167911ms)

** stderr **
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 config get cpus: exit status 14 (90.644723ms)

** stderr **
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.59s)

TestFunctional/parallel/DashboardCmd (14.61s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-455177 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-455177 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 586281: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.61s)

TestFunctional/parallel/DryRun (0.47s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-455177 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-455177 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (195.427292ms)

-- stdout --
	* [functional-455177] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0314 18:41:58.832049  584973 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:41:58.832298  584973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:41:58.832309  584973 out.go:304] Setting ErrFile to fd 2...
	I0314 18:41:58.832315  584973 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:41:58.832659  584973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:41:58.833207  584973 out.go:298] Setting JSON to false
	I0314 18:41:58.834490  584973 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12262,"bootTime":1710429457,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 18:41:58.834564  584973 start.go:139] virtualization:  
	I0314 18:41:58.837377  584973 out.go:177] * [functional-455177] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0314 18:41:58.840260  584973 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:41:58.842461  584973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:41:58.840312  584973 notify.go:220] Checking for updates...
	I0314 18:41:58.845984  584973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:41:58.848359  584973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 18:41:58.850252  584973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 18:41:58.851876  584973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:41:58.854319  584973 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:41:58.854826  584973 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:41:58.876959  584973 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:41:58.877171  584973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:41:58.946245  584973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-14 18:41:58.936723585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:41:58.946357  584973 docker.go:295] overlay module found
	I0314 18:41:58.948790  584973 out.go:177] * Using the docker driver based on existing profile
	I0314 18:41:58.950529  584973 start.go:297] selected driver: docker
	I0314 18:41:58.950547  584973 start.go:901] validating driver "docker" against &{Name:functional-455177 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-455177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:41:58.950685  584973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:41:58.953397  584973 out.go:177] 
	W0314 18:41:58.955280  584973 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0314 18:41:58.956977  584973 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-455177 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.47s)

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-455177 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-455177 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (226.897194ms)

-- stdout --
	* [functional-455177] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0314 18:41:58.216740  584805 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:41:58.216914  584805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:41:58.216924  584805 out.go:304] Setting ErrFile to fd 2...
	I0314 18:41:58.216930  584805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:41:58.217739  584805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:41:58.218180  584805 out.go:298] Setting JSON to false
	I0314 18:41:58.219477  584805 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":12262,"bootTime":1710429457,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0314 18:41:58.219576  584805 start.go:139] virtualization:  
	I0314 18:41:58.222726  584805 out.go:177] * [functional-455177] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0314 18:41:58.224816  584805 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:41:58.226789  584805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:41:58.224959  584805 notify.go:220] Checking for updates...
	I0314 18:41:58.230684  584805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	I0314 18:41:58.233281  584805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	I0314 18:41:58.235148  584805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0314 18:41:58.237124  584805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:41:58.239467  584805 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:41:58.240037  584805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:41:58.265506  584805 docker.go:122] docker version: linux-25.0.4:Docker Engine - Community
	I0314 18:41:58.265627  584805 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:41:58.357758  584805 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-03-14 18:41:58.34236655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:41:58.357860  584805 docker.go:295] overlay module found
	I0314 18:41:58.360983  584805 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0314 18:41:58.362729  584805 start.go:297] selected driver: docker
	I0314 18:41:58.362754  584805 start.go:901] validating driver "docker" against &{Name:functional-455177 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-455177 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:41:58.362879  584805 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:41:58.365221  584805 out.go:177] 
	W0314 18:41:58.367463  584805 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0314 18:41:58.369596  584805 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (13.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-455177 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-455177 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-lpc75" [79772f54-834d-4114-a74c-f5e9a8d63c7c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-lpc75" [79772f54-834d-4114-a74c-f5e9a8d63c7c] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.003700266s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30155
functional_test.go:1671: http://192.168.49.2:30155: success! body:

Hostname: hello-node-connect-7799dfb7c6-lpc75

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30155
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.66s)

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (28.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cbcb2a9e-f631-4cbd-a5d1-0baf86889a9b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003796973s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-455177 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-455177 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-455177 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-455177 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d995ac57-0642-490e-b71d-bbeafd7f69d0] Pending
helpers_test.go:344: "sp-pod" [d995ac57-0642-490e-b71d-bbeafd7f69d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d995ac57-0642-490e-b71d-bbeafd7f69d0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00401449s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-455177 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-455177 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-455177 delete -f testdata/storage-provisioner/pod.yaml: (1.293174603s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-455177 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d9f6bcc8-068b-4d12-9c84-39bee1a5bad9] Pending
helpers_test.go:344: "sp-pod" [d9f6bcc8-068b-4d12-9c84-39bee1a5bad9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d9f6bcc8-068b-4d12-9c84-39bee1a5bad9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003430492s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-455177 exec sp-pod -- ls /tmp/mount
E0314 18:41:53.042464  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.32s)
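For reference, the claim named `myclaim` that the test applies comes from `testdata/storage-provisioner/pvc.yaml` in the minikube repo; a minimal sketch of such a manifest follows (the access mode and storage size here are illustrative assumptions, not the repo's exact values):

```yaml
# Sketch of a claim like "myclaim" above; binds via the default StorageClass
# provided by the storage-provisioner addon.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```

The test then verifies persistence: it writes `/tmp/mount/foo` in one pod, deletes the pod, and confirms the file is still present in a replacement pod backed by the same claim.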

TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh -n functional-455177 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cp functional-455177:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1620992514/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh -n functional-455177 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh -n functional-455177 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/548309/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /etc/test/nested/copy/548309/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/548309.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /etc/ssl/certs/548309.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/548309.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /usr/share/ca-certificates/548309.pem"
2024/03/14 18:42:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/5483092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /etc/ssl/certs/5483092.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/5483092.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /usr/share/ca-certificates/5483092.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
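The hash-style names checked above (`51391683.0`, `3ec20f2e.0`) follow OpenSSL's subject-hash lookup convention for `/etc/ssl/certs`: the file name is the certificate's subject hash plus a `.0` collision-counter suffix. A minimal sketch of reproducing such a name by hand (the cert path is the one synced by this test; assumes `openssl` is on the PATH):

```shell
# Compute the OpenSSL subject hash for a synced certificate; the lookup
# entry in /etc/ssl/certs is this 8-hex-digit hash plus a ".0" suffix.
hash=$(openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/548309.pem)
echo "${hash}.0"
```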

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-455177 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh "sudo systemctl is-active crio": exit status 1 (391.316981ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.39s)

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-455177 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-455177 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-455177 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-455177 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 582327: os: process already finished
helpers_test.go:508: unable to kill pid 582176: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-455177 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-455177 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [00078c84-c922-4759-9657-43acd64974b5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [00078c84-c922-4759-9657-43acd64974b5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004527246s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-455177 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.244.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-455177 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-455177 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-455177 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-zs45s" [0f47522b-e740-40ec-8cbb-5d975ef212a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-zs45s" [0f47522b-e740-40ec-8cbb-5d975ef212a1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004243608s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "368.587293ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "64.399587ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "352.473748ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "96.581821ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/MountCmd/any-port (7.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdany-port328264882/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710441714642765851" to /tmp/TestFunctionalparallelMountCmdany-port328264882/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710441714642765851" to /tmp/TestFunctionalparallelMountCmdany-port328264882/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710441714642765851" to /tmp/TestFunctionalparallelMountCmdany-port328264882/001/test-1710441714642765851
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (479.285839ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 14 18:41 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 14 18:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 14 18:41 test-1710441714642765851
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh cat /mount-9p/test-1710441714642765851
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-455177 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e07d9b4a-d5b6-440e-bfc9-1f22a2d1a000] Pending
helpers_test.go:344: "busybox-mount" [e07d9b4a-d5b6-440e-bfc9-1f22a2d1a000] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e07d9b4a-d5b6-440e-bfc9-1f22a2d1a000] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e07d9b4a-d5b6-440e-bfc9-1f22a2d1a000] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004646913s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-455177 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdany-port328264882/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.74s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 service list -o json
functional_test.go:1490: Took "724.942889ms" to run "out/minikube-linux-arm64 -p functional-455177 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.73s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30274
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30274
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (1.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 version -o=json --components: (1.206850488s)
--- PASS: TestFunctional/parallel/Version/components (1.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-455177 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-455177
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-455177
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-455177 image ls --format short --alsologtostderr:
I0314 18:42:23.273717  587610 out.go:291] Setting OutFile to fd 1 ...
I0314 18:42:23.273976  587610 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.274006  587610 out.go:304] Setting ErrFile to fd 2...
I0314 18:42:23.274026  587610 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.274308  587610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
I0314 18:42:23.275089  587610 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.275291  587610 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.275897  587610 cli_runner.go:164] Run: docker container inspect functional-455177 --format={{.State.Status}}
I0314 18:42:23.295074  587610 ssh_runner.go:195] Run: systemctl --version
I0314 18:42:23.295128  587610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-455177
I0314 18:42:23.314526  587610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/functional-455177/id_rsa Username:docker}
I0314 18:42:23.414328  587610 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-455177 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| gcr.io/google-containers/addon-resizer      | functional-455177 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-455177 | 257da0535211c | 30B    |
| docker.io/library/nginx                     | latest            | 070027a3cbe09 | 192MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | alpine            | be5e6f23a9904 | 43.6MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-455177 image ls --format table --alsologtostderr:
I0314 18:42:24.068869  587758 out.go:291] Setting OutFile to fd 1 ...
I0314 18:42:24.069047  587758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:24.069055  587758 out.go:304] Setting ErrFile to fd 2...
I0314 18:42:24.069062  587758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:24.069346  587758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
I0314 18:42:24.070064  587758 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:24.070203  587758 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:24.070715  587758 cli_runner.go:164] Run: docker container inspect functional-455177 --format={{.State.Status}}
I0314 18:42:24.093367  587758 ssh_runner.go:195] Run: systemctl --version
I0314 18:42:24.093535  587758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-455177
I0314 18:42:24.118918  587758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/functional-455177/id_rsa Username:docker}
I0314 18:42:24.218843  587758 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-455177 image ls --format json --alsologtostderr:
[{"id":"257da0535211cdd2f309475564893ef29d4c9874df22ef72cacfb88f1b9aed15","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-455177"],"size":"30"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-455177"],"size":"32900000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43600000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-455177 image ls --format json --alsologtostderr:
I0314 18:42:23.813780  587710 out.go:291] Setting OutFile to fd 1 ...
I0314 18:42:23.814027  587710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.814098  587710 out.go:304] Setting ErrFile to fd 2...
I0314 18:42:23.814120  587710 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.814385  587710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
I0314 18:42:23.815098  587710 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.815477  587710 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.815999  587710 cli_runner.go:164] Run: docker container inspect functional-455177 --format={{.State.Status}}
I0314 18:42:23.836113  587710 ssh_runner.go:195] Run: systemctl --version
I0314 18:42:23.836170  587710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-455177
I0314 18:42:23.857256  587710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/functional-455177/id_rsa Username:docker}
I0314 18:42:23.954020  587710 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-455177 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 257da0535211cdd2f309475564893ef29d4c9874df22ef72cacfb88f1b9aed15
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-455177
size: "30"
- id: be5e6f23a9904ed26efa7a49fb3d5e63d1c488dbb7b5134e869488afd747ec3f
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43600000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-455177
size: "32900000"
- id: 070027a3cbe09ac697570e31174acc1699701bd0626d2cf71e01623f41a10f53
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-455177 image ls --format yaml --alsologtostderr:
I0314 18:42:23.521971  587635 out.go:291] Setting OutFile to fd 1 ...
I0314 18:42:23.522109  587635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.522142  587635 out.go:304] Setting ErrFile to fd 2...
I0314 18:42:23.522151  587635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.522416  587635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
I0314 18:42:23.523216  587635 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.523491  587635 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.524143  587635 cli_runner.go:164] Run: docker container inspect functional-455177 --format={{.State.Status}}
I0314 18:42:23.551490  587635 ssh_runner.go:195] Run: systemctl --version
I0314 18:42:23.551549  587635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-455177
I0314 18:42:23.585272  587635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/functional-455177/id_rsa Username:docker}
I0314 18:42:23.690075  587635 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
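The YAML listing above is a flat series of `key: value` records at column 0, so it can be post-processed with standard shell tools. A hedged sketch (sample data inlined from the listing; real output would be piped in from `minikube image ls --format yaml` itself):

```shell
# Sum the reported image sizes from `image ls --format yaml`-style output.
# The two records below are copied from the listing above for illustration.
yaml='- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"'

# Keep only the size lines, strip the quotes, and total the bytes.
total=$(printf '%s\n' "$yaml" | grep '^size:' | tr -d '"' | awk '{sum += $2} END {print sum}')
echo "$total"
```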

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh pgrep buildkitd: exit status 1 (342.90762ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image build -t localhost/my-image:functional-455177 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 image build -t localhost/my-image:functional-455177 testdata/build --alsologtostderr: (2.186039404s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-455177 image build -t localhost/my-image:functional-455177 testdata/build --alsologtostderr:
I0314 18:42:23.926230  587730 out.go:291] Setting OutFile to fd 1 ...
I0314 18:42:23.927494  587730 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.927536  587730 out.go:304] Setting ErrFile to fd 2...
I0314 18:42:23.927568  587730 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:42:23.927934  587730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
I0314 18:42:23.928616  587730 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.929961  587730 config.go:182] Loaded profile config "functional-455177": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 18:42:23.930597  587730 cli_runner.go:164] Run: docker container inspect functional-455177 --format={{.State.Status}}
I0314 18:42:23.946774  587730 ssh_runner.go:195] Run: systemctl --version
I0314 18:42:23.946832  587730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-455177
I0314 18:42:23.973601  587730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/functional-455177/id_rsa Username:docker}
I0314 18:42:24.090393  587730 build_images.go:161] Building image from path: /tmp/build.1707418894.tar
I0314 18:42:24.090480  587730 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0314 18:42:24.105338  587730 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1707418894.tar
I0314 18:42:24.110226  587730 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1707418894.tar: stat -c "%s %y" /var/lib/minikube/build/build.1707418894.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1707418894.tar': No such file or directory
I0314 18:42:24.110267  587730 ssh_runner.go:362] scp /tmp/build.1707418894.tar --> /var/lib/minikube/build/build.1707418894.tar (3072 bytes)
I0314 18:42:24.146439  587730 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1707418894
I0314 18:42:24.157919  587730 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1707418894 -xf /var/lib/minikube/build/build.1707418894.tar
I0314 18:42:24.169742  587730 docker.go:360] Building image: /var/lib/minikube/build/build.1707418894
I0314 18:42:24.169825  587730 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-455177 /var/lib/minikube/build/build.1707418894
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:ff7df39c573dfe16f086a7f86832d550336ff55dfe16335cfbeb84dd813d7b7c done
#8 naming to localhost/my-image:functional-455177 done
#8 DONE 0.0s
I0314 18:42:25.999398  587730 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-455177 /var/lib/minikube/build/build.1707418894: (1.82953952s)
I0314 18:42:25.999487  587730 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1707418894
I0314 18:42:26.011714  587730 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1707418894.tar
I0314 18:42:26.023923  587730 build_images.go:217] Built localhost/my-image:functional-455177 from /tmp/build.1707418894.tar
I0314 18:42:26.023966  587730 build_images.go:133] succeeded building to: functional-455177
I0314 18:42:26.023972  587730 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.75s)
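The BuildKit trace above implies a three-step Dockerfile ([1/3] FROM busybox, [2/3] RUN true, [3/3] ADD content.txt). A hedged reconstruction; the verbatim file shipped in testdata/build may differ, and the base-image tag is taken from build step #2:

```shell
# Write out an illustrative Dockerfile matching the logged build steps.
# This is a reconstruction from the trace, not the shipped test fixture.
cat > Dockerfile.sketch <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
cat Dockerfile.sketch
```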

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.955451521s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-455177
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image load --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 image load --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr: (4.239267388s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.62s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdspecific-port1407863711/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (485.007644ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdspecific-port1407863711/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh "sudo umount -f /mount-9p": exit status 1 (373.831858ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-455177 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdspecific-port1407863711/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.30s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1980626587/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1980626587/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1980626587/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T" /mount1: exit status 1 (975.82862ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-455177 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1980626587/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1980626587/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-455177 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1980626587/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.51s)
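Note how the first `findmnt` probe exits non-zero and the command is simply rerun: the 9p mounts come up asynchronously, so the harness polls until they appear. The pattern, sketched as a generic shell helper (function name and attempt count are illustrative, not taken from the test code):

```shell
# Rerun a check command until it succeeds or the attempt budget is spent,
# mirroring how the mount tests re-probe findmnt after an initial failure.
retry() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    "$@" && return 0
    i=$((i + 1))
  done
  return 1
}

# A check that passes immediately succeeds on the first attempt.
retry 3 true && echo "mounted"
```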

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image load --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 image load --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr: (2.983982068s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.835252735s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-455177
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image load --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 image load --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr: (4.029047552s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image save gcr.io/google-containers/addon-resizer:functional-455177 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 image save gcr.io/google-containers/addon-resizer:functional-455177 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.620581407s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image rm gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-455177 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.237375976s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-455177
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 image save --daemon gcr.io/google-containers/addon-resizer:functional-455177 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-455177
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-455177 docker-env) && out/minikube-linux-arm64 status -p functional-455177"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-455177 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.09s)
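The `eval $(minikube … docker-env)` idiom above works because `docker-env` prints `export` statements for the shell to evaluate, after which the `docker` CLI talks to the cluster's daemon. A hedged sketch with canned, illustrative values (the real host and port come from the running profile):

```shell
# Simulate evaluating docker-env output: the printed export lines become
# environment variables pointing the docker CLI at minikube's daemon.
# The address below is made up for illustration.
docker_env_output='export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.49.2:2376"'

eval "$docker_env_output"
echo "$DOCKER_HOST"
```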

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-455177 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-455177
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-455177
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-455177
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMutliControlPlane/serial/StartCluster (145.03s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-658588 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0314 18:43:14.962629  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-658588 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m24.164313701s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (145.03s)

                                                
                                    
TestMutliControlPlane/serial/DeployApp (7.45s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-658588 -- rollout status deployment/busybox: (3.568428742s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-2q2tm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-bnq52 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-cqksq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-2q2tm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-bnq52 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-cqksq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-2q2tm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-bnq52 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-cqksq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (7.45s)

                                                
                                    
TestMutliControlPlane/serial/PingHostFromPods (1.96s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-2q2tm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-2q2tm -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-bnq52 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-bnq52 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-cqksq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-658588 -- exec busybox-5b5d89c9d6-cqksq -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.96s)

TestMutliControlPlane/serial/AddWorkerNode (27.1s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-658588 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-658588 -v=7 --alsologtostderr: (25.894553731s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr: (1.206906939s)
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (27.10s)

TestMutliControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-658588 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.12s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.8s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0314 18:45:31.105901  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMutliControlPlane/serial/CopyFile (21.32s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 status --output json -v=7 --alsologtostderr: (1.127780069s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp testdata/cp-test.txt ha-658588:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3239389764/001/cp-test_ha-658588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588:/home/docker/cp-test.txt ha-658588-m02:/home/docker/cp-test_ha-658588_ha-658588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test_ha-658588_ha-658588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588:/home/docker/cp-test.txt ha-658588-m03:/home/docker/cp-test_ha-658588_ha-658588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test_ha-658588_ha-658588-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588:/home/docker/cp-test.txt ha-658588-m04:/home/docker/cp-test_ha-658588_ha-658588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test_ha-658588_ha-658588-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp testdata/cp-test.txt ha-658588-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3239389764/001/cp-test_ha-658588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m02:/home/docker/cp-test.txt ha-658588:/home/docker/cp-test_ha-658588-m02_ha-658588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test_ha-658588-m02_ha-658588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m02:/home/docker/cp-test.txt ha-658588-m03:/home/docker/cp-test_ha-658588-m02_ha-658588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test_ha-658588-m02_ha-658588-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m02:/home/docker/cp-test.txt ha-658588-m04:/home/docker/cp-test_ha-658588-m02_ha-658588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test_ha-658588-m02_ha-658588-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp testdata/cp-test.txt ha-658588-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3239389764/001/cp-test_ha-658588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m03:/home/docker/cp-test.txt ha-658588:/home/docker/cp-test_ha-658588-m03_ha-658588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test_ha-658588-m03_ha-658588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m03:/home/docker/cp-test.txt ha-658588-m02:/home/docker/cp-test_ha-658588-m03_ha-658588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test_ha-658588-m03_ha-658588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m03:/home/docker/cp-test.txt ha-658588-m04:/home/docker/cp-test_ha-658588-m03_ha-658588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test_ha-658588-m03_ha-658588-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp testdata/cp-test.txt ha-658588-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3239389764/001/cp-test_ha-658588-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m04:/home/docker/cp-test.txt ha-658588:/home/docker/cp-test_ha-658588-m04_ha-658588.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588 "sudo cat /home/docker/cp-test_ha-658588-m04_ha-658588.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m04:/home/docker/cp-test.txt ha-658588-m02:/home/docker/cp-test_ha-658588-m04_ha-658588-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m02 "sudo cat /home/docker/cp-test_ha-658588-m04_ha-658588-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 cp ha-658588-m04:/home/docker/cp-test.txt ha-658588-m03:/home/docker/cp-test_ha-658588-m04_ha-658588-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 ssh -n ha-658588-m03 "sudo cat /home/docker/cp-test_ha-658588-m04_ha-658588-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (21.32s)

TestMutliControlPlane/serial/StopSecondaryNode (11.72s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 node stop m02 -v=7 --alsologtostderr
E0314 18:45:58.805340  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 node stop m02 -v=7 --alsologtostderr: (10.943641086s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr: exit status 7 (778.897987ms)

-- stdout --
	ha-658588
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-658588-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-658588-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-658588-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0314 18:46:03.734583  610632 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:46:03.734696  610632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:46:03.734706  610632 out.go:304] Setting ErrFile to fd 2...
	I0314 18:46:03.734712  610632 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:46:03.734961  610632 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:46:03.735166  610632 out.go:298] Setting JSON to false
	I0314 18:46:03.735193  610632 mustload.go:65] Loading cluster: ha-658588
	I0314 18:46:03.735253  610632 notify.go:220] Checking for updates...
	I0314 18:46:03.735632  610632 config.go:182] Loaded profile config "ha-658588": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:46:03.735645  610632 status.go:255] checking status of ha-658588 ...
	I0314 18:46:03.737511  610632 cli_runner.go:164] Run: docker container inspect ha-658588 --format={{.State.Status}}
	I0314 18:46:03.764262  610632 status.go:330] ha-658588 host status = "Running" (err=<nil>)
	I0314 18:46:03.764286  610632 host.go:66] Checking if "ha-658588" exists ...
	I0314 18:46:03.764583  610632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-658588
	I0314 18:46:03.795642  610632 host.go:66] Checking if "ha-658588" exists ...
	I0314 18:46:03.796121  610632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:46:03.796206  610632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-658588
	I0314 18:46:03.816807  610632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/ha-658588/id_rsa Username:docker}
	I0314 18:46:03.914749  610632 ssh_runner.go:195] Run: systemctl --version
	I0314 18:46:03.919014  610632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:46:03.931945  610632 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 18:46:03.996429  610632 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:true NGoroutines:76 SystemTime:2024-03-14 18:46:03.986359715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 18:46:03.999383  610632 kubeconfig.go:125] found "ha-658588" server: "https://192.168.49.254:8443"
	I0314 18:46:03.999437  610632 api_server.go:166] Checking apiserver status ...
	I0314 18:46:03.999506  610632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:46:04.014019  610632 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2266/cgroup
	I0314 18:46:04.024482  610632 api_server.go:182] apiserver freezer: "10:freezer:/docker/1650ea398cf2f05715b9f66834d8a6473307fde05989da3c7550fd49809da0ec/kubepods/burstable/podba94933b838901b4dfc3eac5b9c17311/da05a9a91fa566329f26372bb890b83391b9a9666a6c9e06336db99a1e509376"
	I0314 18:46:04.024555  610632 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1650ea398cf2f05715b9f66834d8a6473307fde05989da3c7550fd49809da0ec/kubepods/burstable/podba94933b838901b4dfc3eac5b9c17311/da05a9a91fa566329f26372bb890b83391b9a9666a6c9e06336db99a1e509376/freezer.state
	I0314 18:46:04.033174  610632 api_server.go:204] freezer state: "THAWED"
	I0314 18:46:04.033220  610632 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0314 18:46:04.042480  610632 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0314 18:46:04.042514  610632 status.go:422] ha-658588 apiserver status = Running (err=<nil>)
	I0314 18:46:04.042528  610632 status.go:257] ha-658588 status: &{Name:ha-658588 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:46:04.042546  610632 status.go:255] checking status of ha-658588-m02 ...
	I0314 18:46:04.042887  610632 cli_runner.go:164] Run: docker container inspect ha-658588-m02 --format={{.State.Status}}
	I0314 18:46:04.059974  610632 status.go:330] ha-658588-m02 host status = "Stopped" (err=<nil>)
	I0314 18:46:04.059998  610632 status.go:343] host is not running, skipping remaining checks
	I0314 18:46:04.060006  610632 status.go:257] ha-658588-m02 status: &{Name:ha-658588-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:46:04.060041  610632 status.go:255] checking status of ha-658588-m03 ...
	I0314 18:46:04.060408  610632 cli_runner.go:164] Run: docker container inspect ha-658588-m03 --format={{.State.Status}}
	I0314 18:46:04.077245  610632 status.go:330] ha-658588-m03 host status = "Running" (err=<nil>)
	I0314 18:46:04.077272  610632 host.go:66] Checking if "ha-658588-m03" exists ...
	I0314 18:46:04.077742  610632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-658588-m03
	I0314 18:46:04.097702  610632 host.go:66] Checking if "ha-658588-m03" exists ...
	I0314 18:46:04.097999  610632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:46:04.098043  610632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-658588-m03
	I0314 18:46:04.115611  610632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/ha-658588-m03/id_rsa Username:docker}
	I0314 18:46:04.212181  610632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:46:04.224566  610632 kubeconfig.go:125] found "ha-658588" server: "https://192.168.49.254:8443"
	I0314 18:46:04.224635  610632 api_server.go:166] Checking apiserver status ...
	I0314 18:46:04.224705  610632 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:46:04.236122  610632 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2176/cgroup
	I0314 18:46:04.248267  610632 api_server.go:182] apiserver freezer: "10:freezer:/docker/4c81a5aca93975c90dd798eed720d79cf4d51eca679256c5c399bd3281735536/kubepods/burstable/pod6903f5cbb4c8dd612e43f07d9d889b56/ef777be5a4894332e8bc31faaf001f1227caf2cc0f196218300046628ad9023f"
	I0314 18:46:04.248362  610632 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4c81a5aca93975c90dd798eed720d79cf4d51eca679256c5c399bd3281735536/kubepods/burstable/pod6903f5cbb4c8dd612e43f07d9d889b56/ef777be5a4894332e8bc31faaf001f1227caf2cc0f196218300046628ad9023f/freezer.state
	I0314 18:46:04.257564  610632 api_server.go:204] freezer state: "THAWED"
	I0314 18:46:04.257591  610632 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0314 18:46:04.266359  610632 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0314 18:46:04.266392  610632 status.go:422] ha-658588-m03 apiserver status = Running (err=<nil>)
	I0314 18:46:04.266402  610632 status.go:257] ha-658588-m03 status: &{Name:ha-658588-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:46:04.266418  610632 status.go:255] checking status of ha-658588-m04 ...
	I0314 18:46:04.266715  610632 cli_runner.go:164] Run: docker container inspect ha-658588-m04 --format={{.State.Status}}
	I0314 18:46:04.288284  610632 status.go:330] ha-658588-m04 host status = "Running" (err=<nil>)
	I0314 18:46:04.288370  610632 host.go:66] Checking if "ha-658588-m04" exists ...
	I0314 18:46:04.288678  610632 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-658588-m04
	I0314 18:46:04.305052  610632 host.go:66] Checking if "ha-658588-m04" exists ...
	I0314 18:46:04.305374  610632 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:46:04.305413  610632 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-658588-m04
	I0314 18:46:04.324520  610632 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/ha-658588-m04/id_rsa Username:docker}
	I0314 18:46:04.422672  610632 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:46:04.435722  610632 status.go:257] ha-658588-m04 status: &{Name:ha-658588-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopSecondaryNode (11.72s)

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMutliControlPlane/serial/RestartSecondaryNode (39.63s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 node start m02 -v=7 --alsologtostderr
E0314 18:46:24.430456  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:24.435693  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:24.445951  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:24.466215  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:24.506410  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:24.587481  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:24.747925  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:25.068440  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:25.708624  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:26.989239  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:29.549932  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:46:34.670530  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 node start m02 -v=7 --alsologtostderr: (38.293381711s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr: (1.230948533s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMutliControlPlane/serial/RestartSecondaryNode (39.63s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.54s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0314 18:46:44.911282  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (6.536527808s)
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.54s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (243.26s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-658588 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-658588 -v=7 --alsologtostderr
E0314 18:47:05.391571  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-658588 -v=7 --alsologtostderr: (34.312407234s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-658588 --wait=true -v=7 --alsologtostderr
E0314 18:47:46.351788  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:49:08.272262  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 18:50:31.105830  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-658588 --wait=true -v=7 --alsologtostderr: (3m28.751923401s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-658588
--- PASS: TestMutliControlPlane/serial/RestartClusterKeepsNodes (243.26s)

TestMutliControlPlane/serial/DeleteSecondaryNode (8.93s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 node delete m03 -v=7 --alsologtostderr: (7.96007074s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/DeleteSecondaryNode (8.93s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

TestMutliControlPlane/serial/StopCluster (23.51s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 stop -v=7 --alsologtostderr
E0314 18:51:24.430494  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 stop -v=7 --alsologtostderr: (23.394456114s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr: exit status 7 (112.330911ms)

-- stdout --
	ha-658588
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-658588-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-658588-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0314 18:51:27.379652  639270 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:51:27.379869  639270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:51:27.379896  639270 out.go:304] Setting ErrFile to fd 2...
	I0314 18:51:27.379916  639270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:51:27.380185  639270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 18:51:27.380430  639270 out.go:298] Setting JSON to false
	I0314 18:51:27.380484  639270 mustload.go:65] Loading cluster: ha-658588
	I0314 18:51:27.380530  639270 notify.go:220] Checking for updates...
	I0314 18:51:27.381003  639270 config.go:182] Loaded profile config "ha-658588": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 18:51:27.381327  639270 status.go:255] checking status of ha-658588 ...
	I0314 18:51:27.382099  639270 cli_runner.go:164] Run: docker container inspect ha-658588 --format={{.State.Status}}
	I0314 18:51:27.398151  639270 status.go:330] ha-658588 host status = "Stopped" (err=<nil>)
	I0314 18:51:27.398172  639270 status.go:343] host is not running, skipping remaining checks
	I0314 18:51:27.398180  639270 status.go:257] ha-658588 status: &{Name:ha-658588 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:51:27.398224  639270 status.go:255] checking status of ha-658588-m02 ...
	I0314 18:51:27.398529  639270 cli_runner.go:164] Run: docker container inspect ha-658588-m02 --format={{.State.Status}}
	I0314 18:51:27.414532  639270 status.go:330] ha-658588-m02 host status = "Stopped" (err=<nil>)
	I0314 18:51:27.414573  639270 status.go:343] host is not running, skipping remaining checks
	I0314 18:51:27.414583  639270 status.go:257] ha-658588-m02 status: &{Name:ha-658588-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:51:27.414603  639270 status.go:255] checking status of ha-658588-m04 ...
	I0314 18:51:27.414900  639270 cli_runner.go:164] Run: docker container inspect ha-658588-m04 --format={{.State.Status}}
	I0314 18:51:27.430262  639270 status.go:330] ha-658588-m04 host status = "Stopped" (err=<nil>)
	I0314 18:51:27.430284  639270 status.go:343] host is not running, skipping remaining checks
	I0314 18:51:27.430292  639270 status.go:257] ha-658588-m04 status: &{Name:ha-658588-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMutliControlPlane/serial/StopCluster (23.51s)

TestMutliControlPlane/serial/RestartCluster (93.88s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-658588 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0314 18:51:52.112572  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-658588 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m32.852789486s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (93.88s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.58s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.58s)

TestMutliControlPlane/serial/AddSecondaryNode (45.73s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-658588 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-658588 --control-plane -v=7 --alsologtostderr: (44.497486373s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-658588 status -v=7 --alsologtostderr: (1.229365285s)
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (45.73s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestImageBuild/serial/Setup (31.73s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-584609 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-584609 --driver=docker  --container-runtime=docker: (31.728085113s)
--- PASS: TestImageBuild/serial/Setup (31.73s)

TestImageBuild/serial/NormalBuild (1.98s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-584609
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-584609: (1.979975416s)
--- PASS: TestImageBuild/serial/NormalBuild (1.98s)

TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-584609
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)

TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-584609
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-584609
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.78s)

TestJSONOutput/start/Command (56.47s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-805374 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0314 18:55:31.106395  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-805374 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (56.463001518s)
--- PASS: TestJSONOutput/start/Command (56.47s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-805374 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-805374 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-805374 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-805374 --output=json --user=testUser: (5.776190363s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-201967 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-201967 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.529978ms)

-- stdout --
	{"specversion":"1.0","id":"d781a3ab-8443-40b6-b92a-576f89605ec2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-201967] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8edcfc5b-c1e6-416b-a8f4-b66ff754da14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"cc246e1e-8c7b-44bc-8842-86b408a2cf26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"61eb40eb-7fbb-420e-9608-8b5fa487856c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig"}}
	{"specversion":"1.0","id":"ad104406-ec8d-4b94-862c-5e6815d7a28e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube"}}
	{"specversion":"1.0","id":"a0cfe992-d41e-40b9-b5d7-8c678500ebbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e0057c23-ae47-4edb-804c-7e5a44bc1e4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4c2008fa-85f2-495a-877a-8289d438b4b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-201967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-201967
--- PASS: TestErrorJSONOutput (0.25s)

TestKicCustomNetwork/create_custom_network (33.5s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-895259 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-895259 --network=: (31.375620849s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-895259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-895259
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-895259: (2.101692386s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.50s)

TestKicCustomNetwork/use_default_bridge_network (33.49s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-494266 --network=bridge
E0314 18:56:24.432085  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-494266 --network=bridge: (31.494841287s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-494266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-494266
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-494266: (1.976985504s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.49s)

TestKicExistingNetwork (38.51s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-132432 --network=existing-network
E0314 18:56:54.165658  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-132432 --network=existing-network: (36.325987205s)
helpers_test.go:175: Cleaning up "existing-network-132432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-132432
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-132432: (2.037114153s)
--- PASS: TestKicExistingNetwork (38.51s)

TestKicCustomSubnet (39.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-031407 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-031407 --subnet=192.168.60.0/24: (37.314930648s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-031407 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-031407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-031407
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-031407: (2.120089616s)
--- PASS: TestKicCustomSubnet (39.46s)

TestKicStaticIP (36.73s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-108923 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-108923 --static-ip=192.168.200.200: (34.460630994s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-108923 ip
helpers_test.go:175: Cleaning up "static-ip-108923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-108923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-108923: (2.102121494s)
--- PASS: TestKicStaticIP (36.73s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (76.29s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-493049 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-493049 --driver=docker  --container-runtime=docker: (37.52299436s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-495953 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-495953 --driver=docker  --container-runtime=docker: (33.181209421s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-493049
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-495953
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-495953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-495953
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-495953: (2.108017765s)
helpers_test.go:175: Cleaning up "first-493049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-493049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-493049: (2.186250204s)
--- PASS: TestMinikubeProfile (76.29s)

TestMountStart/serial/StartWithMountFirst (9.3s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-380867 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-380867 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.296357842s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.30s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-380867 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-395603 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-395603 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.602279352s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.60s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-395603 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-380867 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-380867 --alsologtostderr -v=5: (1.483156413s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-395603 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-395603
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-395603: (1.219481007s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.32s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-395603
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-395603: (7.32042639s)
--- PASS: TestMountStart/serial/RestartStopped (8.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-395603 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-199663 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0314 19:00:31.106483  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 19:01:24.429761  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-199663 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.906192377s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (36.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-199663 -- rollout status deployment/busybox: (2.469366165s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-p9qv9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-s4h8j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-p9qv9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-s4h8j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-p9qv9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-s4h8j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.62s)
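The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above come from a poll loop: the test re-reads the busybox pods' IPs and retries until one IP per node is reported. A minimal sketch of that loop, assuming a hypothetical `get_pod_ips` stub in place of the real `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` call (so no cluster is needed):

```shell
# Poll until two distinct pod IPs show up (one pod scheduled per node).
# get_pod_ips is a hypothetical stand-in for:
#   kubectl get pods -o jsonpath='{.items[*].status.podIP}'
attempt=0
get_pod_ips() {
  attempt=$((attempt + 1))
  # Simulate the second pod's IP appearing on the third poll.
  if [ "$attempt" -lt 3 ]; then
    ips="10.244.0.3"
  else
    ips="10.244.0.3 10.244.1.2"
  fi
}

count=0
for i in 1 2 3 4 5 6 7 8; do
  get_pod_ips
  set -- $ips          # word-split the space-separated IP list
  count=$#
  if [ "$count" -eq 2 ]; then
    break
  fi
  echo "expected 2 Pod IPs but got $count (may be temporary)"
  # sleep 2            # the real test backs off between polls
done
echo "pod IPs: $ips"
```

The stub mutates `ips` in the calling shell rather than printing it, because a `$(...)` command substitution would run in a subshell and lose the `attempt` counter between polls.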

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-p9qv9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-p9qv9 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-s4h8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-199663 -- exec busybox-5b5d89c9d6-s4h8j -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.16s)
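The host IP pinged above is scraped from busybox `nslookup` output with `awk 'NR==5' | cut -d' ' -f3`: take line 5, then its third space-separated field. A sketch of that parsing on canned output (the sample text is an assumption modeled on typical busybox nslookup formatting, not captured from this run):

```shell
# Extract the resolved host IP: line 5 of the lookup output,
# third space-separated field ("Address 1: <ip> <name>").
lookup_output='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal'

host_ip=$(printf '%s\n' "$lookup_output" | awk 'NR==5' | cut -d" " -f3)
echo "$host_ip"   # → 192.168.58.1
```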

                                                
                                    
TestMultiNode/serial/AddNode (20.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-199663 -v 3 --alsologtostderr
E0314 19:02:47.473666  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-199663 -v 3 --alsologtostderr: (19.835473984s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.61s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-199663 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp testdata/cp-test.txt multinode-199663:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1625950913/001/cp-test_multinode-199663.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663:/home/docker/cp-test.txt multinode-199663-m02:/home/docker/cp-test_multinode-199663_multinode-199663-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m02 "sudo cat /home/docker/cp-test_multinode-199663_multinode-199663-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663:/home/docker/cp-test.txt multinode-199663-m03:/home/docker/cp-test_multinode-199663_multinode-199663-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m03 "sudo cat /home/docker/cp-test_multinode-199663_multinode-199663-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp testdata/cp-test.txt multinode-199663-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1625950913/001/cp-test_multinode-199663-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663-m02:/home/docker/cp-test.txt multinode-199663:/home/docker/cp-test_multinode-199663-m02_multinode-199663.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663 "sudo cat /home/docker/cp-test_multinode-199663-m02_multinode-199663.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663-m02:/home/docker/cp-test.txt multinode-199663-m03:/home/docker/cp-test_multinode-199663-m02_multinode-199663-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m03 "sudo cat /home/docker/cp-test_multinode-199663-m02_multinode-199663-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp testdata/cp-test.txt multinode-199663-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1625950913/001/cp-test_multinode-199663-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663-m03:/home/docker/cp-test.txt multinode-199663:/home/docker/cp-test_multinode-199663-m03_multinode-199663.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663 "sudo cat /home/docker/cp-test_multinode-199663-m03_multinode-199663.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 cp multinode-199663-m03:/home/docker/cp-test.txt multinode-199663-m02:/home/docker/cp-test_multinode-199663-m03_multinode-199663-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 ssh -n multinode-199663-m02 "sudo cat /home/docker/cp-test_multinode-199663-m03_multinode-199663-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.99s)
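Each `cp`/`ssh` pair above verifies a round trip: copy a file into a node, then `cat` it back and compare contents. A self-contained sketch of the same pattern, with plain `cp` and `cat` in a temp directory as hypothetical stand-ins for `minikube cp` and `minikube ssh -n ... sudo cat` (no cluster involved):

```shell
# Round-trip check: write a file, "push" it, read it back, compare.
workdir=$(mktemp -d)
printf 'hello from cp-test\n' > "$workdir/cp-test.txt"

# stands in for: minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
cp "$workdir/cp-test.txt" "$workdir/node-cp-test.txt"

# stands in for: minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
roundtrip=$(cat "$workdir/node-cp-test.txt")

if [ "$roundtrip" = "hello from cp-test" ]; then
  echo "contents match"
fi
rm -r "$workdir"
```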

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-199663 node stop m03: (1.243555718s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-199663 status: exit status 7 (552.753276ms)

                                                
                                                
-- stdout --
	multinode-199663
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-199663-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-199663-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr: exit status 7 (549.871937ms)

                                                
                                                
-- stdout --
	multinode-199663
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-199663-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-199663-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:03:02.536786  713490 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:03:02.536961  713490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:03:02.536983  713490 out.go:304] Setting ErrFile to fd 2...
	I0314 19:03:02.537006  713490 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:03:02.537284  713490 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 19:03:02.537563  713490 out.go:298] Setting JSON to false
	I0314 19:03:02.537619  713490 mustload.go:65] Loading cluster: multinode-199663
	I0314 19:03:02.538055  713490 config.go:182] Loaded profile config "multinode-199663": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:03:02.538099  713490 status.go:255] checking status of multinode-199663 ...
	I0314 19:03:02.538690  713490 cli_runner.go:164] Run: docker container inspect multinode-199663 --format={{.State.Status}}
	I0314 19:03:02.539931  713490 notify.go:220] Checking for updates...
	I0314 19:03:02.562873  713490 status.go:330] multinode-199663 host status = "Running" (err=<nil>)
	I0314 19:03:02.562927  713490 host.go:66] Checking if "multinode-199663" exists ...
	I0314 19:03:02.563217  713490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-199663
	I0314 19:03:02.582515  713490 host.go:66] Checking if "multinode-199663" exists ...
	I0314 19:03:02.582940  713490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 19:03:02.582987  713490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-199663
	I0314 19:03:02.600464  713490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33649 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/multinode-199663/id_rsa Username:docker}
	I0314 19:03:02.698673  713490 ssh_runner.go:195] Run: systemctl --version
	I0314 19:03:02.703160  713490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:03:02.715023  713490 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0314 19:03:02.779353  713490 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:66 SystemTime:2024-03-14 19:03:02.770038931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1055-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:25.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.13.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.7]] Warnings:<nil>}}
	I0314 19:03:02.780011  713490 kubeconfig.go:125] found "multinode-199663" server: "https://192.168.58.2:8443"
	I0314 19:03:02.780058  713490 api_server.go:166] Checking apiserver status ...
	I0314 19:03:02.780129  713490 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:03:02.792200  713490 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2250/cgroup
	I0314 19:03:02.801941  713490 api_server.go:182] apiserver freezer: "10:freezer:/docker/d196d9f627f1a99b7fed48f1ed3c0fb7b007e840dc0e983364b8ee591236fe76/kubepods/burstable/podf5c83d801367ef13ca3196d6eb7bb425/33e41437e3b189f83111499fa9df850684ab7754e35d88ac05bbecbe81d1c920"
	I0314 19:03:02.802018  713490 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d196d9f627f1a99b7fed48f1ed3c0fb7b007e840dc0e983364b8ee591236fe76/kubepods/burstable/podf5c83d801367ef13ca3196d6eb7bb425/33e41437e3b189f83111499fa9df850684ab7754e35d88ac05bbecbe81d1c920/freezer.state
	I0314 19:03:02.811208  713490 api_server.go:204] freezer state: "THAWED"
	I0314 19:03:02.811239  713490 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0314 19:03:02.819977  713490 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0314 19:03:02.820053  713490 status.go:422] multinode-199663 apiserver status = Running (err=<nil>)
	I0314 19:03:02.820080  713490 status.go:257] multinode-199663 status: &{Name:multinode-199663 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 19:03:02.820122  713490 status.go:255] checking status of multinode-199663-m02 ...
	I0314 19:03:02.820459  713490 cli_runner.go:164] Run: docker container inspect multinode-199663-m02 --format={{.State.Status}}
	I0314 19:03:02.836906  713490 status.go:330] multinode-199663-m02 host status = "Running" (err=<nil>)
	I0314 19:03:02.836936  713490 host.go:66] Checking if "multinode-199663-m02" exists ...
	I0314 19:03:02.837268  713490 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-199663-m02
	I0314 19:03:02.854169  713490 host.go:66] Checking if "multinode-199663-m02" exists ...
	I0314 19:03:02.854543  713490 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 19:03:02.854600  713490 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-199663-m02
	I0314 19:03:02.870686  713490 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33654 SSHKeyPath:/home/jenkins/minikube-integration/18384-542901/.minikube/machines/multinode-199663-m02/id_rsa Username:docker}
	I0314 19:03:02.966513  713490 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:03:02.978254  713490 status.go:257] multinode-199663-m02 status: &{Name:multinode-199663-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 19:03:02.978290  713490 status.go:255] checking status of multinode-199663-m03 ...
	I0314 19:03:02.978588  713490 cli_runner.go:164] Run: docker container inspect multinode-199663-m03 --format={{.State.Status}}
	I0314 19:03:02.995217  713490 status.go:330] multinode-199663-m03 host status = "Stopped" (err=<nil>)
	I0314 19:03:02.995243  713490 status.go:343] host is not running, skipping remaining checks
	I0314 19:03:02.995251  713490 status.go:257] multinode-199663-m03 status: &{Name:multinode-199663-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
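In the stderr trace above, the status probe checks /var usage with `df -h /var | awk 'NR==2{print $5}'`: `NR==2` skips the header row and `$5` is the Use% column. A sketch of that parsing on canned `df` output (the sample rows are illustrative, not from this run):

```shell
# Parse the Use% column from df output: NR==2 selects the data row
# under the header; $5 is the fifth whitespace-separated field.
df_output='Filesystem      Size  Used Avail Use% Mounted on
/dev/root        97G   17G   81G  17% /'

usage=$(printf '%s\n' "$df_output" | awk 'NR==2{print $5}')
echo "$usage"   # → 17%
```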

                                                
                                    
TestMultiNode/serial/StartAfterStop (11.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-199663 node start m03 -v=7 --alsologtostderr: (10.924757657s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.73s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (70.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-199663
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-199663
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-199663: (22.603675405s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-199663 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-199663 --wait=true -v=8 --alsologtostderr: (47.656275679s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-199663
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.41s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-199663 node delete m03: (4.858667943s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-199663 stop: (21.470893447s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-199663 status: exit status 7 (101.639174ms)

                                                
                                                
-- stdout --
	multinode-199663
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-199663-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr: exit status 7 (102.565964ms)

                                                
                                                
-- stdout --
	multinode-199663
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-199663-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:04:52.393461  726142 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:04:52.393577  726142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:04:52.393588  726142 out.go:304] Setting ErrFile to fd 2...
	I0314 19:04:52.393593  726142 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:04:52.393871  726142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-542901/.minikube/bin
	I0314 19:04:52.394060  726142 out.go:298] Setting JSON to false
	I0314 19:04:52.394087  726142 mustload.go:65] Loading cluster: multinode-199663
	I0314 19:04:52.394187  726142 notify.go:220] Checking for updates...
	I0314 19:04:52.394496  726142 config.go:182] Loaded profile config "multinode-199663": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 19:04:52.394507  726142 status.go:255] checking status of multinode-199663 ...
	I0314 19:04:52.394962  726142 cli_runner.go:164] Run: docker container inspect multinode-199663 --format={{.State.Status}}
	I0314 19:04:52.412531  726142 status.go:330] multinode-199663 host status = "Stopped" (err=<nil>)
	I0314 19:04:52.412610  726142 status.go:343] host is not running, skipping remaining checks
	I0314 19:04:52.412620  726142 status.go:257] multinode-199663 status: &{Name:multinode-199663 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 19:04:52.412663  726142 status.go:255] checking status of multinode-199663-m02 ...
	I0314 19:04:52.412975  726142 cli_runner.go:164] Run: docker container inspect multinode-199663-m02 --format={{.State.Status}}
	I0314 19:04:52.428593  726142 status.go:330] multinode-199663-m02 host status = "Stopped" (err=<nil>)
	I0314 19:04:52.428614  726142 status.go:343] host is not running, skipping remaining checks
	I0314 19:04:52.428621  726142 status.go:257] multinode-199663-m02 status: &{Name:multinode-199663-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.68s)

TestMultiNode/serial/RestartMultiNode (32.21s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-199663 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-199663 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (31.5048318s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-199663 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (32.21s)

TestMultiNode/serial/ValidateNameConflict (34.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-199663
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-199663-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-199663-m02 --driver=docker  --container-runtime=docker: exit status 14 (92.607387ms)

-- stdout --
	* [multinode-199663-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-199663-m02' is duplicated with machine name 'multinode-199663-m02' in profile 'multinode-199663'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-199663-m03 --driver=docker  --container-runtime=docker
E0314 19:05:31.105771  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-199663-m03 --driver=docker  --container-runtime=docker: (32.262997656s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-199663
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-199663: exit status 80 (346.257595ms)

-- stdout --
	* Adding node m03 to cluster multinode-199663 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-199663-m03 already exists in multinode-199663-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-199663-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-199663-m03: (2.051036312s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.82s)

TestPreload (154.89s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-835643 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0314 19:06:24.430696  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-835643 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m45.492529906s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-835643 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-835643 image pull gcr.io/k8s-minikube/busybox: (1.347268075s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-835643
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-835643: (10.84814304s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-835643 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-835643 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (34.823344756s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-835643 image list
helpers_test.go:175: Cleaning up "test-preload-835643" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-835643
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-835643: (2.132467292s)
--- PASS: TestPreload (154.89s)

TestSkaffold (124.41s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3405591373 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-505499 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-505499 --memory=2600 --driver=docker  --container-runtime=docker: (34.363146561s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3405591373 run --minikube-profile skaffold-505499 --kube-context skaffold-505499 --status-check=true --port-forward=false --interactive=false
E0314 19:10:31.106473  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3405591373 run --minikube-profile skaffold-505499 --kube-context skaffold-505499 --status-check=true --port-forward=false --interactive=false: (1m14.221909808s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5b6dd79b8b-j577p" [98fd2d81-e6b0-452a-8c4c-3a96b05d1ec7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003710799s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-8469cb4d97-hqzjl" [99bdf1cd-5d33-4102-9ecb-c6540f892bc1] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00414994s
helpers_test.go:175: Cleaning up "skaffold-505499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-505499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-505499: (3.135831923s)
--- PASS: TestSkaffold (124.41s)

TestInsufficientStorage (11.74s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-861100 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0314 19:11:24.430615  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-861100 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.404263681s)

-- stdout --
	{"specversion":"1.0","id":"35159e0f-37d6-4465-a64c-7922df418dbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-861100] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d77e0f43-46be-4679-b6e9-084ff88d169e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"9c3661d4-0b78-42e0-b173-755fb885953b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c172652f-c4d8-4a48-b763-7117999d38aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig"}}
	{"specversion":"1.0","id":"c25ef2fe-a412-44e8-8bc3-30b74ca20689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube"}}
	{"specversion":"1.0","id":"b311422a-c1a8-4327-8f44-1608b4bea1bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4e6c17e6-e5ea-4b1f-89dd-8626f241cc18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0131868d-66e8-426d-b1fc-38b6d8df1750","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c924b43e-cb42-4d9e-b04c-ca66d55a87c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"5599b332-0130-47d8-a4b5-aa12cacfb037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"39920bd3-7ff7-4f06-bf04-e2adf29b54ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b18c5666-c170-4a04-8725-910b0c04f2fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-861100\" primary control-plane node in \"insufficient-storage-861100\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"488f2445-1d1a-419a-8c59-5368eef03ca1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1710284843-18375 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5372c18e-d884-45e3-a47a-64d967ae6bc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"169208b7-89e0-4c9a-9514-da30d642be08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-861100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-861100 --output=json --layout=cluster: exit status 7 (310.522144ms)

-- stdout --
	{"Name":"insufficient-storage-861100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-861100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0314 19:11:27.830918  759699 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-861100" does not appear in /home/jenkins/minikube-integration/18384-542901/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-861100 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-861100 --output=json --layout=cluster: exit status 7 (307.092576ms)

-- stdout --
	{"Name":"insufficient-storage-861100","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-861100","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0314 19:11:28.138321  759750 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-861100" does not appear in /home/jenkins/minikube-integration/18384-542901/kubeconfig
	E0314 19:11:28.148831  759750 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/insufficient-storage-861100/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-861100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-861100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-861100: (1.716052061s)
--- PASS: TestInsufficientStorage (11.74s)

TestRunningBinaryUpgrade (119.31s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2101071939 start -p running-upgrade-453619 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0314 19:15:31.106530  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2101071939 start -p running-upgrade-453619 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m22.841730142s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-453619 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0314 19:16:03.972421  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:03.978215  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:03.988459  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:04.008776  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:04.049515  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:04.129836  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:04.290142  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:04.610902  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:05.251855  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:06.532085  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:09.092663  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:14.212819  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-453619 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.911757239s)
helpers_test.go:175: Cleaning up "running-upgrade-453619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-453619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-453619: (2.240867158s)
--- PASS: TestRunningBinaryUpgrade (119.31s)

TestKubernetesUpgrade (126.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0314 19:17:25.895069  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m0.717837877s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-009804
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-009804: (2.713396s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-009804 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-009804 status --format={{.Host}}: exit status 7 (190.738913ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0314 19:18:47.816887  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (32.440547486s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-009804 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (120.163229ms)

-- stdout --
	* [kubernetes-upgrade-009804] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-009804
	    minikube start -p kubernetes-upgrade-009804 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0098042 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-009804 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-009804 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (27.058343872s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-009804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-009804
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-009804: (3.27942385s)
--- PASS: TestKubernetesUpgrade (126.70s)

TestMissingContainerUpgrade (113.64s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1773898516 start -p missing-upgrade-615979 --memory=2200 --driver=docker  --container-runtime=docker
E0314 19:16:24.430071  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 19:16:24.453820  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:16:44.934820  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1773898516 start -p missing-upgrade-615979 --memory=2200 --driver=docker  --container-runtime=docker: (38.754453596s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-615979
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-615979: (10.531341368s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-615979
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-615979 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-615979 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m0.604588621s)
helpers_test.go:175: Cleaning up "missing-upgrade-615979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-615979
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-615979: (2.377395508s)
--- PASS: TestMissingContainerUpgrade (113.64s)

TestStoppedBinaryUpgrade/Setup (1.39s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.39s)

TestStoppedBinaryUpgrade/Upgrade (92.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2062643166 start -p stopped-upgrade-455682 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2062643166 start -p stopped-upgrade-455682 --memory=2200 --vm-driver=docker  --container-runtime=docker: (46.291465641s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2062643166 -p stopped-upgrade-455682 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2062643166 -p stopped-upgrade-455682 stop: (10.869085377s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-455682 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-455682 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.997360703s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.16s)

                                                
                                    
TestPause/serial/Start (99.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-065010 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0314 19:19:27.474973  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-065010 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m39.454909976s)
--- PASS: TestPause/serial/Start (99.46s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.79s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-455682
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-455682: (1.794257214s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.79s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034186 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-034186 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (115.946806ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-034186] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-542901/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-542901/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
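The MK_USAGE failure above is the expected (passing) outcome: `--kubernetes-version` and `--no-kubernetes` are mutually exclusive, so minikube refuses to start and exits with status 14, and the test asserts exactly that. A minimal sketch of the flag check, not minikube's actual implementation (variable names are illustrative):

```shell
# Illustrative re-creation of the mutually-exclusive flag check the test exercises.
no_kubernetes=true
kubernetes_version="1.20"

exit_code=0
if [ "$no_kubernetes" = true ] && [ -n "$kubernetes_version" ]; then
  echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
  echo "to unset the global config run: minikube config unset kubernetes-version" >&2
  exit_code=14   # MK_USAGE exit status, as seen in the log above
fi
echo "exit_code=$exit_code"
```

The test passes in 0.12s precisely because the command fails fast at flag validation instead of starting a cluster.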

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.51s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034186 --driver=docker  --container-runtime=docker
E0314 19:20:31.106435  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034186 --driver=docker  --container-runtime=docker: (41.121029969s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-034186 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.51s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.83s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034186 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034186 --no-kubernetes --driver=docker  --container-runtime=docker: (14.685714943s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-034186 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-034186 status -o json: exit status 2 (341.097119ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-034186","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-034186
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-034186: (1.800367235s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.83s)

                                                
                                    
TestNoKubernetes/serial/Start (9.71s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034186 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034186 --no-kubernetes --driver=docker  --container-runtime=docker: (9.705423269s)
--- PASS: TestNoKubernetes/serial/Start (9.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-034186 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-034186 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.382832ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
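The non-zero exit here is what the test wants: per systemctl(1), `is-active` returns 0 when the unit is active and non-zero (3 for inactive) otherwise, and the ssh session propagates that status back. A sketch of the assertion logic, with a stand-in function in place of a live cluster:

```shell
# Stand-in for:
#   out/minikube-linux-arm64 ssh -p NoKubernetes-034186 \
#     "sudo systemctl is-active --quiet service kubelet"
# Mimics systemctl's exit status 3 for an inactive unit.
kubelet_is_active() { return 3; }

if kubelet_is_active; then
  result="kubelet unexpectedly running"
else
  result="kubelet not running"   # expected: the profile was started with --no-kubernetes
fi
echo "$result"
```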

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-034186
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-034186: (1.230322756s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.65s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-034186 --driver=docker  --container-runtime=docker
E0314 19:21:03.972105  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-034186 --driver=docker  --container-runtime=docker: (7.646991986s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.65s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (39.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-065010 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-065010 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.137035069s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.17s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-034186 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-034186 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.785992ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (93.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0314 19:21:24.430256  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 19:21:31.658205  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m33.044089882s)
--- PASS: TestNetworkPlugins/group/auto/Start (93.04s)

                                                
                                    
TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-065010 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

                                                
                                    
TestPause/serial/VerifyStatus (0.58s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-065010 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-065010 --output=json --layout=cluster: exit status 2 (572.056324ms)

                                                
                                                
-- stdout --
	{"Name":"pause-065010","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-065010","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.58s)
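A paused profile reports StatusCode 418 ("Paused") for the cluster and 405 ("Stopped") for the kubelet, and `minikube status` itself exits 2, which is why the harness records the non-zero exit as a pass. Pulling the top-level status name out of a blob like the one above (a simplified copy of its fields; a real consumer would use a proper JSON parser such as jq):

```shell
# Simplified copy of the top-level fields from the status output above.
status_json='{"Name":"pause-065010","StatusCode":418,"StatusName":"Paused","Step":"Done"}'

# Extract the "StatusName" value with sed; adequate for this flat sample,
# not a general-purpose JSON parser.
status_name=$(printf '%s' "$status_json" | sed -n 's/.*"StatusName":"\([^"]*\)".*/\1/p')
echo "cluster status: $status_name"
```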

                                                
                                    
TestPause/serial/Unpause (0.84s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-065010 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.84s)

                                                
                                    
TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-065010 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

                                                
                                    
TestPause/serial/DeletePaused (2.42s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-065010 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-065010 --alsologtostderr -v=5: (2.415961334s)
--- PASS: TestPause/serial/DeletePaused (2.42s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.75s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-065010
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-065010: exit status 1 (25.211573ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-065010: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.75s)
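After `delete`, the profile's Docker volume must be gone: `docker volume inspect pause-065010` prints `[]` on stdout and exits 1 for a missing volume, and the test treats that failure as cleanup succeeding. A simulated version of that check (no Docker daemon involved; the two variables stand in for the command's captured output and exit status):

```shell
inspect_output="[]"   # what `docker volume inspect` printed for the deleted volume
inspect_status=1      # its exit status ("no such volume")

result="volume still present"
if [ "$inspect_status" -ne 0 ] && [ "$inspect_output" = "[]" ]; then
  result="volume cleaned up"
fi
echo "$result"
```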

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (69.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m9.792290728s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.79s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wzfcv" [6faaae67-d98f-41d2-9cbb-cc469e3cf73a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wzfcv" [6faaae67-d98f-41d2-9cbb-cc469e3cf73a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003903358s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xm6k6" [422ffdd0-82c1-4326-898e-fd99dadac207] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004903128s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2pdb7" [e780fba4-1903-42b6-92b3-cfee9481249e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2pdb7" [e780fba4-1903-42b6-92b3-cfee9481249e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003714324s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (97.79s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m37.786182114s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.79s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.03s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m11.034559231s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.03s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bhc7r" [604cb88e-28b7-4eba-a966-2d5de30d0dc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bhc7r" [604cb88e-28b7-4eba-a966-2d5de30d0dc9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004048865s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-j6rrh" [9e4fef34-d04f-461b-8e23-07e190463e8e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005247496s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cm8w4" [223d04a4-eef5-4d76-ac8a-0848fcd18502] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cm8w4" [223d04a4-eef5-4d76-ac8a-0848fcd18502] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004450187s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.48s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.48s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/false/Start (101.69s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m41.686879978s)
--- PASS: TestNetworkPlugins/group/false/Start (101.69s)

TestNetworkPlugins/group/enable-default-cni/Start (95.65s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0314 19:26:03.972174  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:26:24.430447  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m35.645719186s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (95.65s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (11.33s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kz89h" [c2f6fcd4-a31e-41e3-9587-f63612d97f32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kz89h" [c2f6fcd4-a31e-41e3-9587-f63612d97f32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.004127538s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.33s)
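The "waiting 15m0s for pods matching …" steps above are a poll-until-healthy loop with a deadline: check, sleep, repeat, and give up when the timeout expires. A minimal Python sketch of that pattern (`wait_until` and `pod_ready` are hypothetical names for illustration, not part of the minikube harness):

```python
import time

def wait_until(predicate, timeout: float, interval: float = 0.1) -> bool:
    """Poll predicate() until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# A check that flips to healthy on the third poll, like a pod leaving Pending.
state = {"polls": 0}
def pod_ready() -> bool:
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_until(pod_ready, timeout=2.0))  # True
```

The harness additionally records how long the condition took ("healthy within 11.004127538s"); the same measurement falls out of wrapping the call in two `time.monotonic()` reads.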

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-884x9" [5e751e7b-a365-428a-8039-6e2db7a8c2b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-884x9" [5e751e7b-a365-428a-8039-6e2db7a8c2b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004399088s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

TestNetworkPlugins/group/false/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)
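The DNS checks above run `nslookup kubernetes.default` inside the netcat pod and pass on its exit status: does the name resolve at all. The same signal can be sketched in self-contained Python (resolving `localhost` here, since `kubernetes.default` only resolves through a cluster's DNS service):

```python
import socket

def resolves(name: str) -> bool:
    """True if `name` resolves to an address, mirroring nslookup's exit status."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

print(resolves("localhost"))  # True on any host with a sane resolver config
```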

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (71.8s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m11.79088402s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.80s)

TestNetworkPlugins/group/bridge/Start (94.8s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0314 19:28:00.840356  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:00.846570  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:00.856695  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:00.876960  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:00.917217  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:00.998864  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:01.161484  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:01.481974  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:02.122112  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:03.402980  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:05.963117  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:07.051297  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
E0314 19:28:11.083403  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:21.323827  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:28:27.531942  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
E0314 19:28:41.804682  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m34.796747035s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.80s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8xv6w" [fd7abdae-08e0-48eb-8bd1-656d9ac9aa83] Running
E0314 19:29:08.492815  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003930164s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/flannel/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m9qkk" [133bed75-98a6-4d2a-a60a-f53456936b66] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m9qkk" [133bed75-98a6-4d2a-a60a-f53456936b66] Running
E0314 19:29:22.765383  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005486366s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.46s)

TestNetworkPlugins/group/bridge/NetCatPod (11.44s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5tqrj" [8a1facc2-a0d0-48a1-91a5-1717bcbf80ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5tqrj" [8a1facc2-a0d0-48a1-91a5-1717bcbf80ad] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00457771s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.44s)

TestNetworkPlugins/group/bridge/DNS (0.36s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.36s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)
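The Localhost and HairPin checks above both reduce to `nc -w 5 -z <host> 8080`: a TCP connect with a timeout and no I/O, where success means the port accepted the connection. A self-contained Python equivalent of that probe (the throwaway listener exists only so the example has something to hit offline):

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Rough equivalent of `nc -w <timeout> -z host port`: connect, then close."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Stand up a local listener so the probe can be demonstrated without a cluster.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))  # True: listener is accepting
srv.close()
print(port_open("127.0.0.1", port, timeout=1.0))  # False: connection refused
```

The HairPin variant differs only in target: the pod connects to its own Service name (`netcat`) instead of `localhost`, which exercises hairpin NAT on the node.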

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (96.84s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0314 19:29:57.226494  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.231800  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.242053  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.262324  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.302604  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.383240  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.544041  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:57.864472  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:58.505437  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:29:58.963418  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:58.968670  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:58.978918  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:59.001545  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:59.041711  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:59.124636  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:59.285551  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:59.607468  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:29:59.785776  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:30:00.251602  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:30:01.531958  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:30:02.346872  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:30:04.092270  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:30:07.467084  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:30:09.213091  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-958609 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m36.836551742s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (96.84s)

TestStartStop/group/old-k8s-version/serial/FirstStart (150.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-200017 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0314 19:30:14.166207  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 19:30:17.707255  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:30:19.453325  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:30:30.413953  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
E0314 19:30:31.106018  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 19:30:38.188078  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:30:39.934431  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:30:44.685869  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:31:03.972379  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:31:19.148781  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:31:20.894774  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:31:24.430696  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-200017 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m30.242291981s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (150.24s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-958609 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.32s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.3s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-958609 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-569wg" [ea90b7d0-4b52-4abc-beb1-a77c042ec1bb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-569wg" [ea90b7d0-4b52-4abc-beb1-a77c042ec1bb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.005092685s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.30s)

TestNetworkPlugins/group/kubenet/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-958609 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

TestNetworkPlugins/group/kubenet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-958609 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (66.83s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-869285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0314 19:32:17.728472  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:17.733697  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:17.743975  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:17.764790  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:17.805462  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:17.886097  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:18.046828  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:18.367623  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:19.008770  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:20.289253  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:21.764597  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:21.769820  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:21.780020  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:21.800231  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:21.840518  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:21.920892  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:22.081204  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:22.402031  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:22.849507  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:23.042265  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:24.322968  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:26.883906  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:27.019332  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:32:27.970671  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:32.004427  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:32:38.211815  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:32:41.069569  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:32:42.245507  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-869285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (1m6.830330127s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-200017 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E0314 19:32:42.815549  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
helpers_test.go:344: "busybox" [84ca94fa-9615-4d6d-8991-138efdcd2f72] Pending
helpers_test.go:344: "busybox" [84ca94fa-9615-4d6d-8991-138efdcd2f72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [84ca94fa-9615-4d6d-8991-138efdcd2f72] Running
E0314 19:32:46.565036  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003915262s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-200017 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.63s)
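DeployApp finishes by exec'ing `ulimit -n` inside the busybox pod to read the container's open-file limit. A minimal local sketch of the same probe, run against this shell rather than a pod; the 1024 threshold below is an illustrative assumption, not a value the test enforces.

```shell
# Read the open-file limit, as `kubectl exec busybox -- /bin/sh -c "ulimit -n"`
# does inside the pod, and classify it.
limit=$(ulimit -n)
echo "open-file limit: $limit"
case "$limit" in
  unlimited) status="ok" ;;                            # no cap at all
  *) if [ "$limit" -ge 1024 ]; then status="ok"; else status="low"; fi ;;
esac
echo "$status"
```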

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-200017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-200017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.350989864s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-200017 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.52s)

TestStartStop/group/old-k8s-version/serial/Stop (11.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-200017 --alsologtostderr -v=3
E0314 19:32:58.692032  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:33:00.840510  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:33:02.725877  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-200017 --alsologtostderr -v=3: (11.326185126s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.33s)

TestStartStop/group/no-preload/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-869285 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [90195644-31b9-495d-b9c3-d9288bb9c15b] Pending
helpers_test.go:344: "busybox" [90195644-31b9-495d-b9c3-d9288bb9c15b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [90195644-31b9-495d-b9c3-d9288bb9c15b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004554795s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-869285 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-200017 -n old-k8s-version-200017
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-200017 -n old-k8s-version-200017: exit status 7 (114.832838ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-200017 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
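EnableAddonAfterStop deliberately tolerates the non-zero exit above: `minikube status` exits 7 when the host is Stopped, which the test logs as "may be ok" before enabling the addon. A sketch of that convention, where `check_status` is a hypothetical helper taking the exit code a real `out/minikube-linux-arm64 status ...` invocation would return.

```shell
# Treat exit 0 (Running) and exit 7 (Stopped) as acceptable states to
# proceed from; anything else is a genuine status failure.
check_status() {
  code=$1
  if [ "$code" -eq 0 ] || [ "$code" -eq 7 ]; then
    echo "ok (exit $code)"
    return 0
  fi
  echo "unexpected exit $code" >&2
  return 1
}

check_status 7   # → ok (exit 7): a stopped profile is fine to configure
```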

TestStartStop/group/old-k8s-version/serial/SecondStart (376.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-200017 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-200017 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (6m15.944206846s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-200017 -n old-k8s-version-200017
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (376.31s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.71s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-869285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0314 19:33:14.254574  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-869285 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.543089957s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-869285 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.71s)

TestStartStop/group/no-preload/serial/Stop (11.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-869285 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-869285 --alsologtostderr -v=3: (11.286259266s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.29s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-869285 -n no-preload-869285
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-869285 -n no-preload-869285: exit status 7 (138.753798ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-869285 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/no-preload/serial/SecondStart (266.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-869285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0314 19:33:28.526033  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:33:39.652249  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:33:43.686962  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:34:06.415678  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:06.421054  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:06.431400  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:06.451728  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:06.491978  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:06.572305  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:06.732681  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:07.053692  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:07.694613  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:08.974910  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:11.535203  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:16.655670  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:26.896352  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:34.706827  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:34.712064  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:34.722341  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:34.742621  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:34.782890  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:34.863245  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:35.023662  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:35.344148  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:35.984825  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:37.265342  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:39.825616  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:44.946524  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:47.377168  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:34:55.187149  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:34:57.226863  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:34:58.963793  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:35:01.572965  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:35:05.608075  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:35:15.667660  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:35:24.909714  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:35:26.655734  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
E0314 19:35:28.338144  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:35:31.106328  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
E0314 19:35:56.628088  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:36:03.971687  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:36:07.475576  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 19:36:24.429926  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 19:36:25.235526  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.240837  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.251089  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.271352  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.311634  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.391950  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.552376  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:25.872633  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:26.512993  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:27.793924  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:30.354136  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:35.474805  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:45.715610  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:36:50.258667  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:37:06.195796  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:37:17.727953  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:37:18.548465  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:37:21.764888  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:37:45.413814  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:37:46.564427  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
E0314 19:37:47.156247  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:37:49.448464  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-869285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (4m26.473596699s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-869285 -n no-preload-869285
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.88s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lvp5r" [0a472190-197a-4351-ac4c-271318b4bb67] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004419803s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
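The "waiting 9m0s for pods matching ..." step above is a label-selector readiness poll. A minimal sketch of that loop, where `get_phase` is a hypothetical stub standing in for a kubectl query such as `kubectl get pods -l k8s-app=kubernetes-dashboard -o jsonpath='{.items[0].status.phase}'`.

```shell
# Poll for a Running pod until a 9m deadline (matching the test's budget).
get_phase() { echo "Running"; }   # stub: always healthy in this sketch

state="timed out"
deadline=$(( $(date +%s) + 540 ))
while [ "$(date +%s)" -lt "$deadline" ]; do
  if [ "$(get_phase)" = "Running" ]; then
    state="healthy"
    break
  fi
  sleep 5                         # back off between cluster queries
done
echo "$state"
```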

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-lvp5r" [0a472190-197a-4351-ac4c-271318b4bb67] Running
E0314 19:38:00.840053  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00450215s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-869285 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-869285 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.13s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-869285 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-869285 -n no-preload-869285
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-869285 -n no-preload-869285: exit status 2 (329.765045ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-869285 -n no-preload-869285
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-869285 -n no-preload-869285: exit status 2 (329.106751ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-869285 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-869285 -n no-preload-869285
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-869285 -n no-preload-869285
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

TestStartStop/group/embed-certs/serial/FirstStart (87.24s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-662127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0314 19:39:06.415309  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:39:09.076787  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-662127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m27.240180944s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (87.24s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-27wrm" [5ac7c242-2af9-4fc6-9544-f891dac12997] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006011137s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-27wrm" [5ac7c242-2af9-4fc6-9544-f891dac12997] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00388496s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-200017 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-200017 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.91s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-200017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-200017 -n old-k8s-version-200017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-200017 -n old-k8s-version-200017: exit status 2 (357.758544ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-200017 -n old-k8s-version-200017
E0314 19:39:34.099682  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-200017 -n old-k8s-version-200017: exit status 2 (350.85182ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-200017 --alsologtostderr -v=1
E0314 19:39:34.706484  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-200017 -n old-k8s-version-200017
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-200017 -n old-k8s-version-200017
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.91s)

TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-662127 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7300056f-e673-4220-946a-c31ff2e4379a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7300056f-e673-4220-946a-c31ff2e4379a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006259358s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-662127 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.39s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-556651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-556651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (54.393052889s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-662127 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-662127 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (11.03s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-662127 --alsologtostderr -v=3
E0314 19:39:57.227166  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-662127 --alsologtostderr -v=3: (11.026775908s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-662127 -n embed-certs-662127
E0314 19:39:58.965700  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-662127 -n embed-certs-662127: exit status 7 (118.972213ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-662127 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/embed-certs/serial/SecondStart (272.32s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-662127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0314 19:40:02.389327  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
E0314 19:40:31.105970  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-662127 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (4m31.944467034s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-662127 -n embed-certs-662127
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (272.32s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.52s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-556651 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5f9c22b4-e767-4781-b914-d11c307d8402] Pending
helpers_test.go:344: "busybox" [5f9c22b4-e767-4781-b914-d11c307d8402] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5f9c22b4-e767-4781-b914-d11c307d8402] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004090257s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-556651 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.52s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-556651 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-556651 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.08968202s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-556651 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.79s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-556651 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-556651 --alsologtostderr -v=3: (10.787912505s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651: exit status 7 (89.967611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-556651 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-556651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0314 19:41:03.972551  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
E0314 19:41:24.430431  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/functional-455177/client.crt: no such file or directory
E0314 19:41:25.235823  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:41:52.917378  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kubenet-958609/client.crt: no such file or directory
E0314 19:42:17.728280  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/false-958609/client.crt: no such file or directory
E0314 19:42:21.764386  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/enable-default-cni-958609/client.crt: no such file or directory
E0314 19:42:42.808216  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:42.813555  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:42.823821  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:42.844113  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:42.884452  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:42.964786  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:43.125082  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:43.445351  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:44.085874  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:45.367180  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:46.564249  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
E0314 19:42:47.928055  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:42:53.048428  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:43:00.840824  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:43:03.289348  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:43:04.312742  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.317943  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.328181  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.348430  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.388758  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.469180  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.630115  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:04.950918  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:05.591969  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:06.872768  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:09.433906  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:14.554319  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:23.769986  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:43:24.794985  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:43:45.275957  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
E0314 19:44:04.730842  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:44:06.415472  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/flannel-958609/client.crt: no such file or directory
E0314 19:44:09.614840  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/auto-958609/client.crt: no such file or directory
E0314 19:44:23.887001  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/kindnet-958609/client.crt: no such file or directory
E0314 19:44:26.236501  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-556651 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (4m29.877478993s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.37s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2p8hl" [216b1022-6ecd-45ae-a2a0-1d115299bbcc] Running
E0314 19:44:34.706914  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/bridge-958609/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003714875s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2p8hl" [216b1022-6ecd-45ae-a2a0-1d115299bbcc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004165997s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-662127 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-662127 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.29s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-662127 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-662127 -n embed-certs-662127
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-662127 -n embed-certs-662127: exit status 2 (351.309737ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-662127 -n embed-certs-662127
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-662127 -n embed-certs-662127: exit status 2 (363.774628ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-662127 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-662127 -n embed-certs-662127
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-662127 -n embed-certs-662127
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.29s)

TestStartStop/group/newest-cni/serial/FirstStart (46.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-484951 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0314 19:44:57.227016  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/custom-flannel-958609/client.crt: no such file or directory
E0314 19:44:58.963559  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/calico-958609/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-484951 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (46.718988396s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.72s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c7j6g" [d94cdc61-0886-44b1-95e8-98c02d091d9d] Running
E0314 19:45:26.651668  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/old-k8s-version-200017/client.crt: no such file or directory
E0314 19:45:31.106167  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/addons-511560/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004164606s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-c7j6g" [d94cdc61-0886-44b1-95e8-98c02d091d9d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004313889s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-556651 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.59s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-484951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-484951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.591232718s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.59s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-556651 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-556651 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-556651 --alsologtostderr -v=1: (1.041262954s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651: exit status 2 (433.703049ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651: exit status 2 (486.289718ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-556651 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-556651 -n default-k8s-diff-port-556651
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.58s)

TestStartStop/group/newest-cni/serial/Stop (6.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-484951 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-484951 --alsologtostderr -v=3: (6.07395446s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (6.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-484951 -n newest-cni-484951
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-484951 -n newest-cni-484951: exit status 7 (114.449419ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-484951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (17.86s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-484951 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0314 19:45:48.157686  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/no-preload-869285/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-484951 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (17.439434535s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-484951 -n newest-cni-484951
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.86s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-484951 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/Pause (2.82s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-484951 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-484951 -n newest-cni-484951
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-484951 -n newest-cni-484951: exit status 2 (334.940592ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-484951 -n newest-cni-484951
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-484951 -n newest-cni-484951: exit status 2 (316.475056ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-484951 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-484951 -n newest-cni-484951
E0314 19:46:03.971899  548309 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-542901/.minikube/profiles/skaffold-505499/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-484951 -n newest-cni-484951
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.82s)

Test skip (27/350)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-976617 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-976617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-976617
--- SKIP: TestDownloadOnlyKic (0.59s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-958609 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-958609

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-958609" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-958609

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-958609" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-958609"

                                                
                                                
----------------------- debugLogs end: cilium-958609 [took: 6.162622951s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-958609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-958609
--- SKIP: TestNetworkPlugins/group/cilium (6.53s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-531774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-531774
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)