Test Report: Docker_Linux_docker_arm64 18165

                    
21e5735d41df0fbfa8402e4459b7fe72f1b19e7e:2024-02-14:33133

Test fail (3/335)

| Order | Failed test                                         | Duration |
|-------|-----------------------------------------------------|----------|
| 39    | TestAddons/parallel/Ingress                         | 37.23s   |
| 179   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 57.85s   |
| 247   | TestScheduledStopUnix                               | 34.58s   |
TestAddons/parallel/Ingress (37.23s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-565438 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-565438 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-565438 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4a38bc26-6216-4824-9502-5e0dfc9282c5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4a38bc26-6216-4824-9502-5e0dfc9282c5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00774555s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-565438 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.058221683s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p addons-565438 addons disable ingress-dns --alsologtostderr -v=1: (1.144879046s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p addons-565438 addons disable ingress --alsologtostderr -v=1: (7.823549414s)
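The failing step above is the nslookup against the minikube node IP (192.168.49.2), which is how the test checks that the ingress-dns addon answers queries for the host configured in testdata/ingress-dns-example-v1.yaml. For reference, here is a minimal Go sketch of the same query; it dials the node IP directly instead of shelling out to nslookup as the test does, and the port (53) and timeout values are assumptions, not taken from the test:

// dnscheck.go: resolve hello-john.test against the ingress-dns server
// on the minikube node IP, mirroring the failed
// "nslookup hello-john.test 192.168.49.2" step above.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Send every query to the node IP instead of the system resolver.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		// The run above hit this path: the query timed out.
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}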
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-565438
helpers_test.go:235: (dbg) docker inspect addons-565438:

-- stdout --
	[
	    {
	        "Id": "d81c0e170ee18d6244b0d4014064543fd5d8ada63893fb2ed9f89ff52cd3d07c",
	        "Created": "2024-02-14T02:58:49.61288595Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1272665,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T02:58:49.92188831Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/d81c0e170ee18d6244b0d4014064543fd5d8ada63893fb2ed9f89ff52cd3d07c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d81c0e170ee18d6244b0d4014064543fd5d8ada63893fb2ed9f89ff52cd3d07c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d81c0e170ee18d6244b0d4014064543fd5d8ada63893fb2ed9f89ff52cd3d07c/hosts",
	        "LogPath": "/var/lib/docker/containers/d81c0e170ee18d6244b0d4014064543fd5d8ada63893fb2ed9f89ff52cd3d07c/d81c0e170ee18d6244b0d4014064543fd5d8ada63893fb2ed9f89ff52cd3d07c-json.log",
	        "Name": "/addons-565438",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-565438:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-565438",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ccce3a500798d9c058f16ddf72c456f0191b7c4d8a4a9fff4e4068a72a7af81f-init/diff:/var/lib/docker/overlay2/5910aa9960042d82258ed2c744f886c75b60e8845789b5b8e9c74bac81b955ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ccce3a500798d9c058f16ddf72c456f0191b7c4d8a4a9fff4e4068a72a7af81f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ccce3a500798d9c058f16ddf72c456f0191b7c4d8a4a9fff4e4068a72a7af81f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ccce3a500798d9c058f16ddf72c456f0191b7c4d8a4a9fff4e4068a72a7af81f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-565438",
	                "Source": "/var/lib/docker/volumes/addons-565438/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-565438",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-565438",
	                "name.minikube.sigs.k8s.io": "addons-565438",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b4a65ce408cca8a82652d847333f7879674bc516f4e2ad4f6e5812c667dd3330",
	            "SandboxKey": "/var/run/docker/netns/b4a65ce408cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34053"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34050"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34052"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-565438": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d81c0e170ee1",
	                        "addons-565438"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "095875c5ddf4a3902332beda2e4d8f24a5feabea8e5c116dbe734556611541d0",
	                    "EndpointID": "bbf6a646ecb7fd6b51cbcb4bc99eebce1d98a1a5e86e51ac5ca4c1f3658ad58f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "addons-565438",
	                        "d81c0e170ee1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
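The inspect output above shows minikube publishing ports 22, 2376, 5000, 8443, and 32443 on 127.0.0.1 with empty HostPort in HostConfig.PortBindings (ephemeral ports), with the actual assignments resolved under NetworkSettings.Ports. The Last Start log below reads the SSH mapping with a docker inspect Go template; as an illustrative alternative, this hedged sketch decodes the same JSON from stdin (usage such as `docker inspect addons-565438 | go run ports.go` is an assumption, not a command from the report):

// ports.go: pull the published host port for 22/tcp out of
// `docker inspect <container>` JSON like the block above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields needed here; names match the inspect output above.
type container struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var cs []container // docker inspect emits a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range cs {
		for _, b := range c.NetworkSettings.Ports["22/tcp"] {
			// For the container above this prints:
			// /addons-565438 ssh -> 127.0.0.1:34054
			fmt.Printf("%s ssh -> %s:%s\n", c.Name, b.HostIp, b.HostPort)
		}
	}
}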
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-565438 -n addons-565438
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-565438 logs -n 25: (1.148502989s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-110193                                                                     | download-only-110193   | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| delete  | -p download-only-704261                                                                     | download-only-704261   | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| delete  | -p download-only-131058                                                                     | download-only-131058   | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| delete  | -p download-only-110193                                                                     | download-only-110193   | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| start   | --download-only -p                                                                          | download-docker-763270 | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | download-docker-763270                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-763270                                                                   | download-docker-763270 | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-981860   | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | binary-mirror-981860                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:44401                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-981860                                                                     | binary-mirror-981860   | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| addons  | enable dashboard -p                                                                         | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | addons-565438                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | addons-565438                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-565438 --wait=true                                                                | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 03:00 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --driver=docker                                                               |                        |         |         |                     |                     |
	|         |  --container-runtime=docker                                                                 |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:00 UTC | 14 Feb 24 03:00 UTC |
	|         | -p addons-565438                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-565438 ip                                                                            | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	| addons  | addons-565438 addons disable                                                                | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | -p addons-565438                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | addons-565438                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-565438 ssh cat                                                                       | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | /opt/local-path-provisioner/pvc-a0ae79f4-863f-4a31-aca1-22c767dfc58a_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-565438 addons disable                                                                | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | addons-565438                                                                               |                        |         |         |                     |                     |
	| addons  | addons-565438 addons                                                                        | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-565438 ssh curl -s                                                                   | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-565438 ip                                                                            | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:01 UTC |
	| addons  | addons-565438 addons disable                                                                | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:01 UTC | 14 Feb 24 03:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-565438 addons disable                                                                | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:02 UTC | 14 Feb 24 03:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-565438 addons                                                                        | addons-565438          | jenkins | v1.32.0 | 14 Feb 24 03:02 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:58:26
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:58:26.061929 1272199 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:58:26.062136 1272199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:58:26.062164 1272199 out.go:304] Setting ErrFile to fd 2...
	I0214 02:58:26.062185 1272199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:58:26.062454 1272199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 02:58:26.062924 1272199 out.go:298] Setting JSON to false
	I0214 02:58:26.063852 1272199 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20451,"bootTime":1707859055,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 02:58:26.063961 1272199 start.go:138] virtualization:  
	I0214 02:58:26.066631 1272199 out.go:177] * [addons-565438] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:58:26.069338 1272199 out.go:177]   - MINIKUBE_LOCATION=18165
	I0214 02:58:26.071466 1272199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:58:26.069515 1272199 notify.go:220] Checking for updates...
	I0214 02:58:26.074718 1272199 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 02:58:26.078002 1272199 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 02:58:26.079924 1272199 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 02:58:26.082256 1272199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 02:58:26.084793 1272199 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:58:26.105425 1272199 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:58:26.105546 1272199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:58:26.180353 1272199 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:58:26.170770105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:58:26.180468 1272199 docker.go:295] overlay module found
	I0214 02:58:26.182758 1272199 out.go:177] * Using the docker driver based on user configuration
	I0214 02:58:26.184554 1272199 start.go:298] selected driver: docker
	I0214 02:58:26.184572 1272199 start.go:902] validating driver "docker" against <nil>
	I0214 02:58:26.184585 1272199 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 02:58:26.185218 1272199 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:58:26.245496 1272199 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:58:26.235093663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:58:26.245674 1272199 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:58:26.245907 1272199 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 02:58:26.247972 1272199 out.go:177] * Using Docker driver with root privileges
	I0214 02:58:26.250050 1272199 cni.go:84] Creating CNI manager for ""
	I0214 02:58:26.250082 1272199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 02:58:26.250094 1272199 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 02:58:26.250114 1272199 start_flags.go:321] config:
	{Name:addons-565438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-565438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:58:26.252263 1272199 out.go:177] * Starting control plane node addons-565438 in cluster addons-565438
	I0214 02:58:26.254075 1272199 cache.go:121] Beginning downloading kic base image for docker with docker
	I0214 02:58:26.255954 1272199 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:58:26.257640 1272199 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 02:58:26.257698 1272199 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0214 02:58:26.257723 1272199 cache.go:56] Caching tarball of preloaded images
	I0214 02:58:26.257733 1272199 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:58:26.257828 1272199 preload.go:174] Found /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0214 02:58:26.257839 1272199 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0214 02:58:26.258191 1272199 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/config.json ...
	I0214 02:58:26.258220 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/config.json: {Name:mk0a9986bb4ed1b679c5bf3e8bf61ed362080fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:26.272441 1272199 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:58:26.272579 1272199 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:58:26.272599 1272199 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:58:26.272604 1272199 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:58:26.272612 1272199 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:58:26.272617 1272199 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from local cache
	I0214 02:58:42.018813 1272199 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 from cached tarball
	I0214 02:58:42.018879 1272199 cache.go:194] Successfully downloaded all kic artifacts
	I0214 02:58:42.018927 1272199 start.go:365] acquiring machines lock for addons-565438: {Name:mke4d580d4c8bc8cc0ec4e80f7286f9ecd4221f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 02:58:42.019812 1272199 start.go:369] acquired machines lock for "addons-565438" in 847.823µs
	I0214 02:58:42.019870 1272199 start.go:93] Provisioning new machine with config: &{Name:addons-565438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-565438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0214 02:58:42.019968 1272199 start.go:125] createHost starting for "" (driver="docker")
	I0214 02:58:42.022656 1272199 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0214 02:58:42.023010 1272199 start.go:159] libmachine.API.Create for "addons-565438" (driver="docker")
	I0214 02:58:42.023078 1272199 client.go:168] LocalClient.Create starting
	I0214 02:58:42.023219 1272199 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem
	I0214 02:58:42.851245 1272199 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem
	I0214 02:58:43.217187 1272199 cli_runner.go:164] Run: docker network inspect addons-565438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 02:58:43.233370 1272199 cli_runner.go:211] docker network inspect addons-565438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 02:58:43.233454 1272199 network_create.go:281] running [docker network inspect addons-565438] to gather additional debugging logs...
	I0214 02:58:43.233485 1272199 cli_runner.go:164] Run: docker network inspect addons-565438
	W0214 02:58:43.247512 1272199 cli_runner.go:211] docker network inspect addons-565438 returned with exit code 1
	I0214 02:58:43.247547 1272199 network_create.go:284] error running [docker network inspect addons-565438]: docker network inspect addons-565438: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-565438 not found
	I0214 02:58:43.247560 1272199 network_create.go:286] output of [docker network inspect addons-565438]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-565438 not found
	
	** /stderr **
	I0214 02:58:43.247779 1272199 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 02:58:43.262252 1272199 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40025b5d10}
	I0214 02:58:43.262289 1272199 network_create.go:124] attempt to create docker network addons-565438 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 02:58:43.262350 1272199 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-565438 addons-565438
	I0214 02:58:43.323570 1272199 network_create.go:108] docker network addons-565438 192.168.49.0/24 created
	I0214 02:58:43.323606 1272199 kic.go:121] calculated static IP "192.168.49.2" for the "addons-565438" container
	I0214 02:58:43.323708 1272199 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 02:58:43.337950 1272199 cli_runner.go:164] Run: docker volume create addons-565438 --label name.minikube.sigs.k8s.io=addons-565438 --label created_by.minikube.sigs.k8s.io=true
	I0214 02:58:43.354741 1272199 oci.go:103] Successfully created a docker volume addons-565438
	I0214 02:58:43.354837 1272199 cli_runner.go:164] Run: docker run --rm --name addons-565438-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-565438 --entrypoint /usr/bin/test -v addons-565438:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 02:58:45.588441 1272199 cli_runner.go:217] Completed: docker run --rm --name addons-565438-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-565438 --entrypoint /usr/bin/test -v addons-565438:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (2.233551761s)
	I0214 02:58:45.588470 1272199 oci.go:107] Successfully prepared a docker volume addons-565438
	I0214 02:58:45.588509 1272199 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 02:58:45.588533 1272199 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 02:58:45.588617 1272199 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-565438:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 02:58:49.537984 1272199 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-565438:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.949323839s)
	I0214 02:58:49.538017 1272199 kic.go:203] duration metric: took 3.949481 seconds to extract preloaded images to volume
	W0214 02:58:49.538195 1272199 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 02:58:49.538318 1272199 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 02:58:49.599258 1272199 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-565438 --name addons-565438 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-565438 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-565438 --network addons-565438 --ip 192.168.49.2 --volume addons-565438:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 02:58:49.932549 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Running}}
	I0214 02:58:49.962884 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:58:49.985991 1272199 cli_runner.go:164] Run: docker exec addons-565438 stat /var/lib/dpkg/alternatives/iptables
	I0214 02:58:50.059042 1272199 oci.go:144] the created container "addons-565438" has a running status.
	I0214 02:58:50.059071 1272199 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa...
	I0214 02:58:50.629715 1272199 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 02:58:50.659061 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:58:50.696037 1272199 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 02:58:50.696070 1272199 kic_runner.go:114] Args: [docker exec --privileged addons-565438 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 02:58:50.788733 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:58:50.805650 1272199 machine.go:88] provisioning docker machine ...
	I0214 02:58:50.805682 1272199 ubuntu.go:169] provisioning hostname "addons-565438"
	I0214 02:58:50.805744 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:50.827386 1272199 main.go:141] libmachine: Using SSH client type: native
	I0214 02:58:50.827899 1272199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34054 <nil> <nil>}
	I0214 02:58:50.827917 1272199 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-565438 && echo "addons-565438" | sudo tee /etc/hostname
	I0214 02:58:50.989937 1272199 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-565438
	
	I0214 02:58:50.990016 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:51.011352 1272199 main.go:141] libmachine: Using SSH client type: native
	I0214 02:58:51.011819 1272199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34054 <nil> <nil>}
	I0214 02:58:51.011849 1272199 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-565438' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-565438/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-565438' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 02:58:51.148066 1272199 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 02:58:51.148102 1272199 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18165-1266022/.minikube CaCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18165-1266022/.minikube}
	I0214 02:58:51.148132 1272199 ubuntu.go:177] setting up certificates
	I0214 02:58:51.148145 1272199 provision.go:83] configureAuth start
	I0214 02:58:51.148213 1272199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-565438
	I0214 02:58:51.167355 1272199 provision.go:138] copyHostCerts
	I0214 02:58:51.167445 1272199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem (1078 bytes)
	I0214 02:58:51.167579 1272199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem (1123 bytes)
	I0214 02:58:51.167646 1272199 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem (1679 bytes)
	I0214 02:58:51.167723 1272199 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem org=jenkins.addons-565438 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-565438]
	I0214 02:58:51.527020 1272199 provision.go:172] copyRemoteCerts
	I0214 02:58:51.527086 1272199 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 02:58:51.527134 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:51.546071 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:58:51.644601 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 02:58:51.668508 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0214 02:58:51.692837 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 02:58:51.716436 1272199 provision.go:86] duration metric: configureAuth took 568.27032ms
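
minikube generates the server certificate in-process in Go, so no openssl invocation appears in the log. Purely as an illustrative equivalent, a certificate with the same SAN list could be signed against the same CA like this (file names are assumed to match the paths above):

    # sketch only: minikube does this internally, not via openssl
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.addons-565438" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-565438') \
      -out server.pem
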
	I0214 02:58:51.716466 1272199 ubuntu.go:193] setting minikube options for container-runtime
	I0214 02:58:51.716655 1272199 config.go:182] Loaded profile config "addons-565438": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 02:58:51.716715 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:51.732265 1272199 main.go:141] libmachine: Using SSH client type: native
	I0214 02:58:51.732675 1272199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34054 <nil> <nil>}
	I0214 02:58:51.732689 1272199 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0214 02:58:51.863982 1272199 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0214 02:58:51.864006 1272199 ubuntu.go:71] root file system type: overlay
	I0214 02:58:51.864166 1272199 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0214 02:58:51.864235 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:51.880145 1272199 main.go:141] libmachine: Using SSH client type: native
	I0214 02:58:51.880602 1272199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34054 <nil> <nil>}
	I0214 02:58:51.880697 1272199 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0214 02:58:52.025248 1272199 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0214 02:58:52.025347 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:52.042175 1272199 main.go:141] libmachine: Using SSH client type: native
	I0214 02:58:52.042598 1272199 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34054 <nil> <nil>}
	I0214 02:58:52.042621 1272199 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0214 02:58:52.794837 1272199 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 02:58:52.018276902 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0214 02:58:52.794902 1272199 machine.go:91] provisioned docker machine in 1.989230584s
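
The sequence above is the usual safe-replace pattern for a systemd unit: render the candidate to docker.service.new, diff it against the live unit, and install and restart only when they differ. Condensed, the trick is that diff exits non-zero on a difference:

    # the || branch runs only when the rendered unit differs from the live one
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker; }
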
	I0214 02:58:52.794913 1272199 client.go:171] LocalClient.Create took 10.771824242s
	I0214 02:58:52.794949 1272199 start.go:167] duration metric: libmachine.API.Create for "addons-565438" took 10.771941818s
	I0214 02:58:52.794963 1272199 start.go:300] post-start starting for "addons-565438" (driver="docker")
	I0214 02:58:52.794974 1272199 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 02:58:52.795065 1272199 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 02:58:52.795139 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:52.811067 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:58:52.904710 1272199 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 02:58:52.907503 1272199 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 02:58:52.907538 1272199 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 02:58:52.907550 1272199 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 02:58:52.907565 1272199 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 02:58:52.907576 1272199 filesync.go:126] Scanning /home/jenkins/minikube-integration/18165-1266022/.minikube/addons for local assets ...
	I0214 02:58:52.907644 1272199 filesync.go:126] Scanning /home/jenkins/minikube-integration/18165-1266022/.minikube/files for local assets ...
	I0214 02:58:52.907696 1272199 start.go:303] post-start completed in 112.726873ms
	I0214 02:58:52.907995 1272199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-565438
	I0214 02:58:52.925551 1272199 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/config.json ...
	I0214 02:58:52.925839 1272199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 02:58:52.925888 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:52.940674 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:58:53.032469 1272199 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 02:58:53.037116 1272199 start.go:128] duration metric: createHost completed in 11.017132181s
	I0214 02:58:53.037142 1272199 start.go:83] releasing machines lock for "addons-565438", held for 11.017302564s
	I0214 02:58:53.037226 1272199 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-565438
	I0214 02:58:53.052824 1272199 ssh_runner.go:195] Run: cat /version.json
	I0214 02:58:53.052884 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:53.052889 1272199 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 02:58:53.052948 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:58:53.076348 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:58:53.091721 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:58:53.167271 1272199 ssh_runner.go:195] Run: systemctl --version
	I0214 02:58:53.300534 1272199 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 02:58:53.304698 1272199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 02:58:53.329663 1272199 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 02:58:53.329808 1272199 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 02:58:53.360089 1272199 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0214 02:58:53.360157 1272199 start.go:475] detecting cgroup driver to use...
	I0214 02:58:53.360194 1272199 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 02:58:53.360292 1272199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 02:58:53.376019 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0214 02:58:53.385502 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 02:58:53.395507 1272199 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 02:58:53.395580 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 02:58:53.405282 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 02:58:53.414747 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 02:58:53.424647 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 02:58:53.434115 1272199 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 02:58:53.443095 1272199 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0214 02:58:53.452878 1272199 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 02:58:53.461314 1272199 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 02:58:53.469817 1272199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 02:58:53.556454 1272199 ssh_runner.go:195] Run: sudo systemctl restart containerd
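
After the sed edits above, the significant lines of /etc/containerd/config.toml can be spot-checked; per this run they should read approximately:

    grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected, given the edits above:
    #   sandbox_image = "registry.k8s.io/pause:3.9"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
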
	I0214 02:58:53.670369 1272199 start.go:475] detecting cgroup driver to use...
	I0214 02:58:53.670442 1272199 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 02:58:53.670519 1272199 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0214 02:58:53.684932 1272199 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0214 02:58:53.685046 1272199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 02:58:53.700478 1272199 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 02:58:53.717548 1272199 ssh_runner.go:195] Run: which cri-dockerd
	I0214 02:58:53.721450 1272199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0214 02:58:53.734761 1272199 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0214 02:58:53.754897 1272199 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0214 02:58:53.857762 1272199 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0214 02:58:53.958331 1272199 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0214 02:58:53.958509 1272199 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0214 02:58:53.980017 1272199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 02:58:54.081843 1272199 ssh_runner.go:195] Run: sudo systemctl restart docker
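
The 130-byte daemon.json written above is not echoed in the log. Given the detected "cgroupfs" driver it must at minimum pin Docker's cgroup driver; a plausible shape (an assumption, not a transcript of the actual file) is:

    # assumed minimal content of /etc/docker/daemon.json -- not captured in this log
    sudo tee /etc/docker/daemon.json <<'EOF'
    {"exec-opts": ["native.cgroupdriver=cgroupfs"]}
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
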
	I0214 02:58:54.320277 1272199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0214 02:58:54.332133 1272199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0214 02:58:54.344615 1272199 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0214 02:58:54.435894 1272199 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0214 02:58:54.523648 1272199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 02:58:54.614948 1272199 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0214 02:58:54.629165 1272199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0214 02:58:54.640139 1272199 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 02:58:54.730623 1272199 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0214 02:58:54.798237 1272199 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0214 02:58:54.798377 1272199 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0214 02:58:54.803932 1272199 start.go:543] Will wait 60s for crictl version
	I0214 02:58:54.804044 1272199 ssh_runner.go:195] Run: which crictl
	I0214 02:58:54.809631 1272199 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 02:58:54.859053 1272199 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I0214 02:58:54.859164 1272199 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0214 02:58:54.880558 1272199 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0214 02:58:54.906201 1272199 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0214 02:58:54.906299 1272199 cli_runner.go:164] Run: docker network inspect addons-565438 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 02:58:54.922219 1272199 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 02:58:54.925755 1272199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 02:58:54.936131 1272199 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 02:58:54.936201 1272199 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0214 02:58:54.953897 1272199 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0214 02:58:54.953919 1272199 docker.go:615] Images already preloaded, skipping extraction
	I0214 02:58:54.953987 1272199 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0214 02:58:54.971208 1272199 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0214 02:58:54.971229 1272199 cache_images.go:84] Images are preloaded, skipping loading
	I0214 02:58:54.971296 1272199 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0214 02:58:55.032666 1272199 cni.go:84] Creating CNI manager for ""
	I0214 02:58:55.032697 1272199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 02:58:55.032720 1272199 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 02:58:55.032740 1272199 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-565438 NodeName:addons-565438 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 02:58:55.032895 1272199 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-565438"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
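
The generated file is a multi-document YAML holding InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. Once it has been copied into place as /var/tmp/minikube/kubeadm.yaml (a few lines below), it can be sanity-checked with the bundled kubeadm; the validate subcommand has existed since Kubernetes v1.26:

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
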
	I0214 02:58:55.032963 1272199 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-565438 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-565438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 02:58:55.033041 1272199 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 02:58:55.044374 1272199 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 02:58:55.044459 1272199 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 02:58:55.054027 1272199 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0214 02:58:55.072917 1272199 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 02:58:55.092952 1272199 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0214 02:58:55.114137 1272199 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 02:58:55.117974 1272199 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 02:58:55.129007 1272199 certs.go:56] Setting up /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438 for IP: 192.168.49.2
	I0214 02:58:55.129040 1272199 certs.go:190] acquiring lock for shared ca certs: {Name:mk38eec77f10b2e9943b70dec5fadf9f48ce78cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:55.129618 1272199 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key
	I0214 02:58:55.383226 1272199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt ...
	I0214 02:58:55.383266 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt: {Name:mkaec9113d672519b20cd2911cf9f3edc4b81b4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:55.383838 1272199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key ...
	I0214 02:58:55.383856 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key: {Name:mkc170ce03e56caef61d2f51943202926eb344f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:55.383955 1272199 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key
	I0214 02:58:55.676576 1272199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.crt ...
	I0214 02:58:55.676607 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.crt: {Name:mkec4b1526ffbc2e94d3fb9451cc6591f5915e41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:55.676790 1272199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key ...
	I0214 02:58:55.676801 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key: {Name:mkab8f99a8a88287044fb591cafce6de96d2f7f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:55.676924 1272199 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.key
	I0214 02:58:55.676940 1272199 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt with IP's: []
	I0214 02:58:56.576515 1272199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt ...
	I0214 02:58:56.576549 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: {Name:mk72ae665db7394132d390a66ef1015bb7a62207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:56.576743 1272199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.key ...
	I0214 02:58:56.576755 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.key: {Name:mkb88699bbc867eb692507179f1e4eaf07c2933d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:56.576841 1272199 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.key.dd3b5fb2
	I0214 02:58:56.576865 1272199 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 02:58:56.862707 1272199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.crt.dd3b5fb2 ...
	I0214 02:58:56.862737 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.crt.dd3b5fb2: {Name:mk52448319e89de2502dca784bc47f3c6531acbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:56.863400 1272199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.key.dd3b5fb2 ...
	I0214 02:58:56.863417 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.key.dd3b5fb2: {Name:mk4b74d78a0a61d01328f3eacd8516e17675a8ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:56.863959 1272199 certs.go:337] copying /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.crt
	I0214 02:58:56.864038 1272199 certs.go:341] copying /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.key
	I0214 02:58:56.864097 1272199 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.key
	I0214 02:58:56.864117 1272199 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.crt with IP's: []
	I0214 02:58:57.075235 1272199 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.crt ...
	I0214 02:58:57.075264 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.crt: {Name:mke5dcfc8ab66ff8baefef4b8b2aaa2e21ac4bb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:57.075454 1272199 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.key ...
	I0214 02:58:57.075467 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.key: {Name:mk56bccedf8f56589df8d7b994ada0c98d05353f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:57.076053 1272199 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 02:58:57.076100 1272199 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem (1078 bytes)
	I0214 02:58:57.076126 1272199 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem (1123 bytes)
	I0214 02:58:57.076150 1272199 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem (1679 bytes)
	I0214 02:58:57.076763 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 02:58:57.102926 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 02:58:57.128822 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 02:58:57.154302 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 02:58:57.178344 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 02:58:57.202271 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 02:58:57.225781 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 02:58:57.250754 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 02:58:57.274526 1272199 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 02:58:57.299204 1272199 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 02:58:57.318276 1272199 ssh_runner.go:195] Run: openssl version
	I0214 02:58:57.323861 1272199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 02:58:57.333094 1272199 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:58:57.336507 1272199 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:58 /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:58:57.336573 1272199 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 02:58:57.343397 1272199 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
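
The b5213941.0 symlink name is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash plus a .0 suffix for the first certificate with that hash, and b5213941 is exactly what the x509 -hash call above prints for minikubeCA.pem. To verify by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints: b5213941
    ls -l /etc/ssl/certs/b5213941.0
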
	I0214 02:58:57.353031 1272199 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 02:58:57.356398 1272199 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 02:58:57.356446 1272199 kubeadm.go:404] StartCluster: {Name:addons-565438 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-565438 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:58:57.356571 1272199 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0214 02:58:57.372442 1272199 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 02:58:57.381845 1272199 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 02:58:57.390765 1272199 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 02:58:57.390840 1272199 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 02:58:57.399691 1272199 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 02:58:57.399777 1272199 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 02:58:57.444443 1272199 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0214 02:58:57.444562 1272199 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 02:58:57.496324 1272199 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 02:58:57.496397 1272199 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 02:58:57.496437 1272199 kubeadm.go:322] OS: Linux
	I0214 02:58:57.496488 1272199 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 02:58:57.496539 1272199 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 02:58:57.496587 1272199 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 02:58:57.496636 1272199 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 02:58:57.496685 1272199 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 02:58:57.496734 1272199 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 02:58:57.496778 1272199 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0214 02:58:57.496826 1272199 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0214 02:58:57.496873 1272199 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0214 02:58:57.565169 1272199 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 02:58:57.565365 1272199 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 02:58:57.565508 1272199 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 02:58:57.866940 1272199 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 02:58:57.871709 1272199 out.go:204]   - Generating certificates and keys ...
	I0214 02:58:57.871903 1272199 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 02:58:57.871997 1272199 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 02:58:58.394428 1272199 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 02:58:58.798330 1272199 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 02:58:59.094773 1272199 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 02:58:59.493484 1272199 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 02:58:59.790562 1272199 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 02:58:59.790801 1272199 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-565438 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 02:58:59.912994 1272199 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 02:58:59.913122 1272199 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-565438 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 02:59:00.681215 1272199 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 02:59:01.217835 1272199 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 02:59:01.584357 1272199 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 02:59:01.584826 1272199 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 02:59:02.211738 1272199 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 02:59:02.513406 1272199 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 02:59:02.821408 1272199 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 02:59:03.645793 1272199 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 02:59:03.646628 1272199 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 02:59:03.649479 1272199 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 02:59:03.652053 1272199 out.go:204]   - Booting up control plane ...
	I0214 02:59:03.652155 1272199 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 02:59:03.652228 1272199 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 02:59:03.652792 1272199 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 02:59:03.666262 1272199 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 02:59:03.667259 1272199 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 02:59:03.667308 1272199 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 02:59:03.765636 1272199 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 02:59:11.274541 1272199 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.508208 seconds
	I0214 02:59:11.274680 1272199 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 02:59:11.293120 1272199 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 02:59:11.822658 1272199 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 02:59:11.822852 1272199 kubeadm.go:322] [mark-control-plane] Marking the node addons-565438 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 02:59:12.333346 1272199 kubeadm.go:322] [bootstrap-token] Using token: kujr48.fcyskmfun45nfb5p
	I0214 02:59:12.335756 1272199 out.go:204]   - Configuring RBAC rules ...
	I0214 02:59:12.335878 1272199 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 02:59:12.340794 1272199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 02:59:12.353136 1272199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 02:59:12.358377 1272199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 02:59:12.363944 1272199 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 02:59:12.368219 1272199 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 02:59:12.381703 1272199 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 02:59:12.632582 1272199 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 02:59:12.753287 1272199 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 02:59:12.755692 1272199 kubeadm.go:322] 
	I0214 02:59:12.755771 1272199 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 02:59:12.755782 1272199 kubeadm.go:322] 
	I0214 02:59:12.755855 1272199 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 02:59:12.755865 1272199 kubeadm.go:322] 
	I0214 02:59:12.755890 1272199 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 02:59:12.756330 1272199 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 02:59:12.756398 1272199 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 02:59:12.756417 1272199 kubeadm.go:322] 
	I0214 02:59:12.756471 1272199 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0214 02:59:12.756482 1272199 kubeadm.go:322] 
	I0214 02:59:12.756527 1272199 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 02:59:12.756535 1272199 kubeadm.go:322] 
	I0214 02:59:12.756584 1272199 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 02:59:12.756657 1272199 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 02:59:12.756724 1272199 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 02:59:12.756733 1272199 kubeadm.go:322] 
	I0214 02:59:12.756990 1272199 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 02:59:12.757073 1272199 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 02:59:12.757083 1272199 kubeadm.go:322] 
	I0214 02:59:12.757326 1272199 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token kujr48.fcyskmfun45nfb5p \
	I0214 02:59:12.757434 1272199 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185 \
	I0214 02:59:12.757622 1272199 kubeadm.go:322] 	--control-plane 
	I0214 02:59:12.757640 1272199 kubeadm.go:322] 
	I0214 02:59:12.757888 1272199 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 02:59:12.757903 1272199 kubeadm.go:322] 
	I0214 02:59:12.758151 1272199 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token kujr48.fcyskmfun45nfb5p \
	I0214 02:59:12.758398 1272199 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185 
	I0214 02:59:12.762955 1272199 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 02:59:12.763079 1272199 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
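
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. It can be recomputed from the CA certificate (path per this run) and should match the hash printed by kubeadm:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex
    # expect: 09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185
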
	I0214 02:59:12.764316 1272199 cni.go:84] Creating CNI manager for ""
	I0214 02:59:12.764364 1272199 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 02:59:12.768513 1272199 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0214 02:59:12.770255 1272199 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 02:59:12.784725 1272199 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0214 02:59:12.816298 1272199 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 02:59:12.816438 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:12.816517 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d minikube.k8s.io/name=addons-565438 minikube.k8s.io/updated_at=2024_02_14T02_59_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:13.201030 1272199 ops.go:34] apiserver oom_adj: -16
	I0214 02:59:13.201126 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:13.701834 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:14.201382 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:14.701905 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:15.201874 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:15.701192 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:16.202100 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:16.702100 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:17.201937 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:17.701682 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:18.202217 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:18.701742 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:19.202190 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:19.702256 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:20.201929 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:20.702011 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:21.202217 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:21.701577 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:22.201862 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:22.702048 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:23.201919 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:23.701297 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:24.201306 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:24.702017 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:25.201266 1272199 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 02:59:25.374191 1272199 kubeadm.go:1088] duration metric: took 12.557813873s to wait for elevateKubeSystemPrivileges.
	I0214 02:59:25.374216 1272199 kubeadm.go:406] StartCluster complete in 28.01777455s
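The repeated "kubectl get sa default" runs above (roughly every 500ms from 02:59:13 to 02:59:25) are a fixed-interval poll: minikube re-checks until the default ServiceAccount exists in the new cluster, since it is created asynchronously by the controller manager after the apiserver comes up. A minimal sketch of that wait loop, assuming it shells out to the same kubectl binary seen in the log; this is not minikube's actual code:

	// Minimal sketch (assumed, not minikube's implementation) of the poll
	// visible above: re-run `kubectl get sa default` every 500ms until it
	// succeeds, then report the elapsed time.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		for {
			// Paths mirror the log lines above; adjust for other setups.
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.4/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				break // default ServiceAccount exists; bootstrap is done
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Printf("took %s to wait for elevateKubeSystemPrivileges.\n", time.Since(start))
	}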
	I0214 02:59:25.374236 1272199 settings.go:142] acquiring lock: {Name:mka5ccfc6e6b301490609b4401d47e44477d3784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:59:25.374742 1272199 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 02:59:25.375189 1272199 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/kubeconfig: {Name:mk66f7cad9af599b8ab92f8fcd3383675b5457c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:59:25.377473 1272199 config.go:182] Loaded profile config "addons-565438": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 02:59:25.377520 1272199 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 02:59:25.377761 1272199 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0214 02:59:25.377952 1272199 addons.go:69] Setting yakd=true in profile "addons-565438"
	I0214 02:59:25.377968 1272199 addons.go:234] Setting addon yakd=true in "addons-565438"
	I0214 02:59:25.378020 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.378462 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.378945 1272199 addons.go:69] Setting cloud-spanner=true in profile "addons-565438"
	I0214 02:59:25.378960 1272199 addons.go:234] Setting addon cloud-spanner=true in "addons-565438"
	I0214 02:59:25.378995 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.379378 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.379796 1272199 addons.go:69] Setting metrics-server=true in profile "addons-565438"
	I0214 02:59:25.379814 1272199 addons.go:234] Setting addon metrics-server=true in "addons-565438"
	I0214 02:59:25.379842 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.380214 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.380713 1272199 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-565438"
	I0214 02:59:25.380748 1272199 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-565438"
	I0214 02:59:25.380783 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.381139 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.381948 1272199 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-565438"
	I0214 02:59:25.381975 1272199 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-565438"
	I0214 02:59:25.382010 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.382394 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.383509 1272199 addons.go:69] Setting default-storageclass=true in profile "addons-565438"
	I0214 02:59:25.383527 1272199 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-565438"
	I0214 02:59:25.383794 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.393554 1272199 addons.go:69] Setting registry=true in profile "addons-565438"
	I0214 02:59:25.393581 1272199 addons.go:234] Setting addon registry=true in "addons-565438"
	I0214 02:59:25.393633 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.397683 1272199 addons.go:69] Setting gcp-auth=true in profile "addons-565438"
	I0214 02:59:25.397717 1272199 mustload.go:65] Loading cluster: addons-565438
	I0214 02:59:25.397908 1272199 config.go:182] Loaded profile config "addons-565438": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 02:59:25.398147 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.398472 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.415739 1272199 addons.go:69] Setting ingress=true in profile "addons-565438"
	I0214 02:59:25.415770 1272199 addons.go:234] Setting addon ingress=true in "addons-565438"
	I0214 02:59:25.415825 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.416273 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.423825 1272199 addons.go:69] Setting storage-provisioner=true in profile "addons-565438"
	I0214 02:59:25.423854 1272199 addons.go:234] Setting addon storage-provisioner=true in "addons-565438"
	I0214 02:59:25.423903 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.424325 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.438622 1272199 addons.go:69] Setting ingress-dns=true in profile "addons-565438"
	I0214 02:59:25.438651 1272199 addons.go:234] Setting addon ingress-dns=true in "addons-565438"
	I0214 02:59:25.438705 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.440703 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.442657 1272199 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-565438"
	I0214 02:59:25.442684 1272199 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-565438"
	I0214 02:59:25.442989 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.455729 1272199 addons.go:69] Setting inspektor-gadget=true in profile "addons-565438"
	I0214 02:59:25.455766 1272199 addons.go:234] Setting addon inspektor-gadget=true in "addons-565438"
	I0214 02:59:25.455813 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.456257 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.479847 1272199 addons.go:69] Setting volumesnapshots=true in profile "addons-565438"
	I0214 02:59:25.479876 1272199 addons.go:234] Setting addon volumesnapshots=true in "addons-565438"
	I0214 02:59:25.479934 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.480361 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.587773 1272199 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0214 02:59:25.589342 1272199 addons.go:234] Setting addon default-storageclass=true in "addons-565438"
	I0214 02:59:25.593367 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0214 02:59:25.595260 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0214 02:59:25.597351 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0214 02:59:25.600260 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0214 02:59:25.607806 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0214 02:59:25.613124 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0214 02:59:25.629726 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0214 02:59:25.631955 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0214 02:59:25.633917 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0214 02:59:25.633943 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0214 02:59:25.634020 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.610325 1272199 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0214 02:59:25.610335 1272199 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0214 02:59:25.610341 1272199 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0214 02:59:25.610359 1272199 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0214 02:59:25.610388 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.610822 1272199 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 02:59:25.643880 1272199 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 02:59:25.643955 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0214 02:59:25.644643 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.674251 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.675795 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.678145 1272199 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0214 02:59:25.678194 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0214 02:59:25.678270 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.688621 1272199 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0214 02:59:25.691414 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0214 02:59:25.691437 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0214 02:59:25.691500 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.701386 1272199 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0214 02:59:25.704966 1272199 out.go:177]   - Using image docker.io/registry:2.8.3
	I0214 02:59:25.706633 1272199 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0214 02:59:25.706652 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0214 02:59:25.706719 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.706754 1272199 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0214 02:59:25.706770 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0214 02:59:25.706824 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.717074 1272199 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 02:59:25.717098 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 02:59:25.717166 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.720153 1272199 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 02:59:25.720172 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0214 02:59:25.720235 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.733237 1272199 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0214 02:59:25.731480 1272199 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-565438"
	I0214 02:59:25.731710 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.738169 1272199 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0214 02:59:25.735767 1272199 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0214 02:59:25.735803 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:25.740714 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:25.740870 1272199 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0214 02:59:25.744148 1272199 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0214 02:59:25.741125 1272199 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 02:59:25.748461 1272199 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0214 02:59:25.748477 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0214 02:59:25.748543 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.746723 1272199 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 02:59:25.755810 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0214 02:59:25.755881 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.746734 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0214 02:59:25.799943 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.848652 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.876052 1272199 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 02:59:25.876076 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 02:59:25.876139 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:25.900400 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.903552 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.871502 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.916867 1272199 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-565438" context rescaled to 1 replicas
	I0214 02:59:25.916912 1272199 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0214 02:59:25.927732 1272199 out.go:177] * Verifying Kubernetes components...
	I0214 02:59:25.933995 1272199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 02:59:25.934208 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.948815 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.964262 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:25.987944 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:26.000395 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:26.014999 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:26.022656 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	W0214 02:59:26.024926 1272199 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0214 02:59:26.024964 1272199 retry.go:31] will retry after 146.654993ms: ssh: handshake failed: EOF
	I0214 02:59:26.032775 1272199 out.go:177]   - Using image docker.io/busybox:stable
	I0214 02:59:26.035159 1272199 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0214 02:59:26.037624 1272199 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 02:59:26.037648 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0214 02:59:26.037715 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:26.064174 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
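Every "new ssh client: &{IP:127.0.0.1 Port:34054 ...}" entry above reuses one fact: the docker container inspect calls with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} extract the host port Docker mapped to the container's SSH port 22, and minikube then dials 127.0.0.1 on that port. A minimal sketch of the lookup; sshHostPort is a hypothetical helper, not a minikube function:

	// Sketch of recovering the SSH host port for a container by shelling out
	// to `docker container inspect` with the same template seen in the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", container,
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-565438")
		if err != nil {
			panic(err)
		}
		fmt.Println("dial 127.0.0.1:" + port) // e.g. port 34054 in this run
	}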
	I0214 02:59:26.300602 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0214 02:59:26.300673 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0214 02:59:26.315862 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0214 02:59:26.339213 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0214 02:59:26.339280 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0214 02:59:26.535814 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0214 02:59:26.535845 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0214 02:59:26.538430 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0214 02:59:26.538461 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0214 02:59:26.548621 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0214 02:59:26.568704 1272199 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0214 02:59:26.568730 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0214 02:59:26.587322 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 02:59:26.590077 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0214 02:59:26.592310 1272199 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0214 02:59:26.592341 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0214 02:59:26.606112 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 02:59:26.791975 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0214 02:59:26.792002 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0214 02:59:26.808133 1272199 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0214 02:59:26.808160 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0214 02:59:26.816192 1272199 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0214 02:59:26.816218 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0214 02:59:26.831932 1272199 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0214 02:59:26.831958 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0214 02:59:26.968190 1272199 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0214 02:59:26.968228 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0214 02:59:26.974950 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0214 02:59:26.980270 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0214 02:59:27.046931 1272199 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0214 02:59:27.046959 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0214 02:59:27.143608 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0214 02:59:27.143636 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0214 02:59:27.186002 1272199 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0214 02:59:27.186037 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0214 02:59:27.194163 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0214 02:59:27.234243 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0214 02:59:27.234278 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0214 02:59:27.349007 1272199 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0214 02:59:27.349035 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0214 02:59:27.386300 1272199 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0214 02:59:27.386326 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0214 02:59:27.519144 1272199 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0214 02:59:27.519170 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0214 02:59:27.535163 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0214 02:59:27.535189 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0214 02:59:27.683432 1272199 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 02:59:27.683459 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0214 02:59:27.738645 1272199 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0214 02:59:27.738684 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0214 02:59:27.757649 1272199 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0214 02:59:27.757674 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0214 02:59:27.799581 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0214 02:59:27.836932 1272199 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0214 02:59:27.836959 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0214 02:59:27.845221 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0214 02:59:27.845246 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0214 02:59:28.025833 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0214 02:59:28.153957 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0214 02:59:28.153986 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0214 02:59:28.157398 1272199 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0214 02:59:28.157432 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0214 02:59:28.183388 1272199 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:59:28.183412 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0214 02:59:28.443875 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0214 02:59:28.443902 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0214 02:59:28.487104 1272199 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0214 02:59:28.487138 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0214 02:59:28.505101 1272199 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 02:59:28.505126 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0214 02:59:28.542594 1272199 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 02:59:28.542622 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0214 02:59:28.555838 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0214 02:59:28.564232 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0214 02:59:28.757939 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0214 02:59:29.384836 1272199 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.740167866s)
	I0214 02:59:29.384892 1272199 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.450868409s)
	I0214 02:59:29.385746 1272199 node_ready.go:35] waiting up to 6m0s for node "addons-565438" to be "Ready" ...
	I0214 02:59:29.385939 1272199 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
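The injected record comes from the sed pipeline launched at 02:59:25: it inserts a hosts stanza ahead of the "forward . /etc/resolv.conf" line in the CoreDNS Corefile (and a log directive above errors), so host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster. The resulting fragment, reconstructed from that command text rather than captured from the cluster, shown as a Go string for consistency with the other sketches:

	// Corefile fragment after the sed pipeline above; reconstructed from the
	// command, not read back from the coredns ConfigMap.
	package main

	import "fmt"

	const corefileHosts = `        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf`

	func main() { fmt.Println(corefileHosts) }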
	I0214 02:59:29.389580 1272199 node_ready.go:49] node "addons-565438" has status "Ready":"True"
	I0214 02:59:29.389610 1272199 node_ready.go:38] duration metric: took 3.832782ms waiting for node "addons-565438" to be "Ready" ...
	I0214 02:59:29.389630 1272199 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 02:59:29.398667 1272199 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zldht" in "kube-system" namespace to be "Ready" ...
	I0214 02:59:29.709612 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.393666455s)
	I0214 02:59:31.463042 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:32.287856 1272199 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0214 02:59:32.288013 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:32.312667 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:33.418868 1272199 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0214 02:59:33.786844 1272199 addons.go:234] Setting addon gcp-auth=true in "addons-565438"
	I0214 02:59:33.786908 1272199 host.go:66] Checking if "addons-565438" exists ...
	I0214 02:59:33.787421 1272199 cli_runner.go:164] Run: docker container inspect addons-565438 --format={{.State.Status}}
	I0214 02:59:33.817498 1272199 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0214 02:59:33.817549 1272199 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-565438
	I0214 02:59:33.841316 1272199 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34054 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/addons-565438/id_rsa Username:docker}
	I0214 02:59:33.910732 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:35.446266 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.897609737s)
	I0214 02:59:35.446311 1272199 addons.go:470] Verifying addon ingress=true in "addons-565438"
	I0214 02:59:35.448425 1272199 out.go:177] * Verifying ingress addon...
	I0214 02:59:35.446496 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.859149125s)
	I0214 02:59:35.446539 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.856432718s)
	I0214 02:59:35.446569 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.840427732s)
	I0214 02:59:35.446617 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.471641597s)
	I0214 02:59:35.446652 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.466359083s)
	I0214 02:59:35.446695 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.252506483s)
	I0214 02:59:35.446800 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.420937768s)
	I0214 02:59:35.446865 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.890996798s)
	I0214 02:59:35.446939 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.88258605s)
	I0214 02:59:35.446751 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.647143631s)
	I0214 02:59:35.451639 1272199 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0214 02:59:35.453728 1272199 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-565438 service yakd-dashboard -n yakd-dashboard
	
	I0214 02:59:35.451973 1272199 addons.go:470] Verifying addon metrics-server=true in "addons-565438"
	W0214 02:59:35.452011 1272199 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 02:59:35.452024 1272199 addons.go:470] Verifying addon registry=true in "addons-565438"
	I0214 02:59:35.456316 1272199 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0214 02:59:35.456943 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:35.456972 1272199 retry.go:31] will retry after 142.617079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0214 02:59:35.459064 1272199 out.go:177] * Verifying registry addon...
	I0214 02:59:35.461752 1272199 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0214 02:59:35.471148 1272199 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0214 02:59:35.488005 1272199 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0214 02:59:35.488031 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:35.600279 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
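The "resource mapping not found ... ensure CRDs are installed first" failure above is a CRD registration race: the batch apply submits a VolumeSnapshotClass object in the same invocation that creates the snapshot.storage.k8s.io CRDs, and the custom resource reaches the API server before its CRD is established. Per the retry.go line, minikube's answer is simply to re-run the same apply after a short delay, which is the forced re-apply on the line just above. A minimal sketch of that retry pattern, assumed rather than taken from minikube's source:

	// Sketch only: retry a kubectl apply batch that can race CRD registration.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func applyWithRetry(attempts int, files ...string) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command("kubectl", args...).Run(); err == nil {
				return nil // CRDs established; custom resources accepted
			}
			time.Sleep(200 * time.Millisecond) // let the API server register the CRDs
		}
		return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
	}

	func main() {
		if err := applyWithRetry(3,
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
			fmt.Println(err)
		}
	}

Ordering the CRD manifests before the resources that instantiate them does not fully avoid the race, since apply returns before the CRD is Established; retrying, as the log shows, is the simpler fix.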
	I0214 02:59:35.928470 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:35.980218 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:35.998780 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:36.470666 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:36.510929 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:36.913977 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.1559891s)
	I0214 02:59:36.914116 1272199 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-565438"
	I0214 02:59:36.916647 1272199 out.go:177] * Verifying csi-hostpath-driver addon...
	I0214 02:59:36.914070 1272199 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.096553113s)
	I0214 02:59:36.919179 1272199 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0214 02:59:36.921348 1272199 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0214 02:59:36.919985 1272199 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0214 02:59:36.923833 1272199 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0214 02:59:36.923854 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0214 02:59:36.933635 1272199 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0214 02:59:36.933661 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:36.955969 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:36.966583 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:37.125315 1272199 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0214 02:59:37.125380 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0214 02:59:37.189619 1272199 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 02:59:37.189698 1272199 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0214 02:59:37.279489 1272199 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0214 02:59:37.430510 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:37.458057 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:37.472951 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:37.849250 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.248906879s)
	I0214 02:59:37.930652 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:37.956238 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:37.967627 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:38.414245 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:38.444725 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:38.456957 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:38.468091 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:38.640813 1272199 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.361270872s)
	I0214 02:59:38.644065 1272199 addons.go:470] Verifying addon gcp-auth=true in "addons-565438"
	I0214 02:59:38.646438 1272199 out.go:177] * Verifying gcp-auth addon...
	I0214 02:59:38.649175 1272199 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0214 02:59:38.652709 1272199 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0214 02:59:38.652734 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:38.930538 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:38.958375 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:38.967483 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:39.154314 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:39.429831 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:39.456332 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:39.467155 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:39.653089 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:39.930234 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:39.957065 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:39.967095 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:40.153167 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:40.429772 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:40.457757 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:40.466347 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:40.653273 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:40.908184 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:40.932774 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:40.960150 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:40.968323 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:41.154390 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:41.429185 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:41.456623 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:41.467533 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:41.653430 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:41.929773 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:41.956213 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:41.968418 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:42.154705 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:42.430101 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:42.456617 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:42.467685 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:42.653641 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:42.931809 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:42.958383 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:42.967108 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:43.153796 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:43.406302 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:43.445117 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:43.461538 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:43.475265 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:43.653544 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:43.930891 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:43.956414 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:43.968178 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:44.153845 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:44.429037 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:44.457760 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:44.467748 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:44.654373 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:44.930202 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:44.956296 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:44.967305 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:45.154183 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:45.406553 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:45.428920 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:45.456294 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:45.467676 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:45.653636 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:45.929265 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:45.956321 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:45.966855 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:46.152777 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:46.429759 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:46.456937 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:46.467078 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:46.653954 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:46.929767 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:46.956882 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:46.966156 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:47.153048 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:47.430677 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:47.455845 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:47.466659 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:47.653615 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:47.905607 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:47.930326 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:47.956635 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:47.966557 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:48.154015 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:48.428690 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:48.456810 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:48.466800 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:48.653421 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:48.929234 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:48.956563 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:48.967593 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:49.153342 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:49.429752 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:49.456082 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:49.466535 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:49.653839 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:49.905861 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:49.929812 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:49.956459 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:49.967326 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:50.154216 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:50.431370 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:50.457187 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:50.467028 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:50.655220 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:50.930376 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:50.957397 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:50.966997 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:51.153948 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:51.429921 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:51.455943 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:51.467121 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:51.659272 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:51.905958 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:51.929294 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:51.957155 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:51.967617 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:52.153093 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:52.431111 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:52.457181 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:52.467021 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:52.653149 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:52.928999 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:52.956916 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:52.966902 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:53.153988 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:53.430801 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:53.456712 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:53.472836 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:53.653102 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:53.929711 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:53.957393 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:53.967063 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:54.156739 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:54.405834 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:54.429870 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:54.458776 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:54.466746 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:54.653627 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:54.933015 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:54.956508 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:54.967939 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:55.153107 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:55.429159 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:55.456728 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:55.466680 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:55.661398 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:55.931085 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:55.956161 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:55.966781 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:56.153205 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:56.407810 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:56.429666 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:56.456778 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:56.469568 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0214 02:59:56.652978 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:56.929533 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:56.957095 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:56.966664 1272199 kapi.go:107] duration metric: took 21.504903207s to wait for kubernetes.io/minikube-addons=registry ...
	I0214 02:59:57.153333 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:57.430065 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:57.456987 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:57.654147 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:57.930870 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:57.957289 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:58.155964 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:58.429335 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:58.456356 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:58.653541 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:58.905090 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 02:59:58.930483 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:58.957260 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:59.153043 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:59.431693 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:59.457145 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 02:59:59.652981 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 02:59:59.931627 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 02:59:59.956993 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:00.177546 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:00.442835 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:00.464583 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:00.682513 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:00.909177 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 03:00:00.933323 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:00.962616 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:01.156665 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:01.430708 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:01.459947 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:01.654001 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:01.931272 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:01.957863 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:02.156820 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:02.429993 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:02.456813 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:02.653890 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:02.936172 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:02.956718 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:03.154125 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:03.405673 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 03:00:03.429757 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:03.457074 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:03.652780 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:03.929450 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:03.956978 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:04.152925 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:04.430588 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:04.457769 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:04.653342 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:04.929796 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:04.957629 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:05.154074 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:05.406088 1272199 pod_ready.go:102] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"False"
	I0214 03:00:05.430666 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:05.458588 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:05.654577 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:05.930454 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:05.957152 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:06.153792 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:06.405945 1272199 pod_ready.go:92] pod "coredns-5dd5756b68-zldht" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:06.406014 1272199 pod_ready.go:81] duration metric: took 37.007314996s waiting for pod "coredns-5dd5756b68-zldht" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.406040 1272199 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.411977 1272199 pod_ready.go:92] pod "etcd-addons-565438" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:06.412044 1272199 pod_ready.go:81] duration metric: took 5.982148ms waiting for pod "etcd-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.412068 1272199 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.419508 1272199 pod_ready.go:92] pod "kube-apiserver-addons-565438" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:06.419568 1272199 pod_ready.go:81] duration metric: took 7.476523ms waiting for pod "kube-apiserver-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.419602 1272199 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.431128 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:06.431804 1272199 pod_ready.go:92] pod "kube-controller-manager-addons-565438" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:06.431856 1272199 pod_ready.go:81] duration metric: took 12.23372ms waiting for pod "kube-controller-manager-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.431898 1272199 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bzb4x" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.438255 1272199 pod_ready.go:92] pod "kube-proxy-bzb4x" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:06.438328 1272199 pod_ready.go:81] duration metric: took 6.404257ms waiting for pod "kube-proxy-bzb4x" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.438367 1272199 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.456242 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:06.653071 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:06.803022 1272199 pod_ready.go:92] pod "kube-scheduler-addons-565438" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:06.803056 1272199 pod_ready.go:81] duration metric: took 364.648288ms waiting for pod "kube-scheduler-addons-565438" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.803069 1272199 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-784lh" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:06.932877 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:06.957971 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:07.154127 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:07.204435 1272199 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-784lh" in "kube-system" namespace has status "Ready":"True"
	I0214 03:00:07.204467 1272199 pod_ready.go:81] duration metric: took 401.388553ms waiting for pod "nvidia-device-plugin-daemonset-784lh" in "kube-system" namespace to be "Ready" ...
	I0214 03:00:07.204477 1272199 pod_ready.go:38] duration metric: took 37.814833263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
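(Annotation: the pod_ready.go lines above are a different check from the kapi waits: they read the pod's Ready condition, which is the "Ready":"True"/"False" status quoted in the log. A self-contained sketch of that condition lookup; the helper name is an assumption, not minikube's API.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True -- the
// "Ready":"True"/"False" value printed by the pod_ready.go lines above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(pod)) // false, i.e. status "Ready":"False"
}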
	I0214 03:00:07.204498 1272199 api_server.go:52] waiting for apiserver process to appear ...
	I0214 03:00:07.204573 1272199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:00:07.223649 1272199 api_server.go:72] duration metric: took 41.306707547s to wait for apiserver process to appear ...
	I0214 03:00:07.223691 1272199 api_server.go:88] waiting for apiserver healthz status ...
	I0214 03:00:07.223712 1272199 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 03:00:07.234001 1272199 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 03:00:07.235505 1272199 api_server.go:141] control plane version: v1.28.4
	I0214 03:00:07.235534 1272199 api_server.go:131] duration metric: took 11.835568ms to wait for apiserver health ...
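(Annotation: after confirming the apiserver process with pgrep, the healthz wait above is a plain HTTPS GET; /healthz answers 200 with the literal body "ok", which is the stray "ok" line in this log. A minimal probe sketch follows; it skips TLS verification for brevity, whereas a faithful check would authenticate with the cluster's certificates from the kubeconfig.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is for illustration only; minikube's own check
	// trusts and authenticates with the cluster's certificates.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz") // endpoint taken from this log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}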
	I0214 03:00:07.235543 1272199 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 03:00:07.411756 1272199 system_pods.go:59] 17 kube-system pods found
	I0214 03:00:07.411789 1272199 system_pods.go:61] "coredns-5dd5756b68-zldht" [345ecd07-2b2f-4579-9666-ce49392fcfb2] Running
	I0214 03:00:07.411800 1272199 system_pods.go:61] "csi-hostpath-attacher-0" [65079fee-ef55-4637-9a8c-788e0318ccae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 03:00:07.411810 1272199 system_pods.go:61] "csi-hostpath-resizer-0" [fec2647c-130d-42c1-8b94-8ad5eab2610b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 03:00:07.411818 1272199 system_pods.go:61] "csi-hostpathplugin-mnt6r" [e0765358-b75c-4cc1-9adb-d1133964c076] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 03:00:07.411827 1272199 system_pods.go:61] "etcd-addons-565438" [a5ce36d4-775c-40fc-946e-dd104e1a388e] Running
	I0214 03:00:07.411833 1272199 system_pods.go:61] "kube-apiserver-addons-565438" [71c3cdf0-cca0-49ba-82b6-af9f48fc2dee] Running
	I0214 03:00:07.411844 1272199 system_pods.go:61] "kube-controller-manager-addons-565438" [7dfdd6c6-69a3-41ef-bcfa-9a6a3465340d] Running
	I0214 03:00:07.411852 1272199 system_pods.go:61] "kube-ingress-dns-minikube" [e59d2f9f-feb4-479e-a76a-bfd609f76604] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 03:00:07.411862 1272199 system_pods.go:61] "kube-proxy-bzb4x" [3a762368-e150-40d4-afe1-5e8ea1f6d231] Running
	I0214 03:00:07.411867 1272199 system_pods.go:61] "kube-scheduler-addons-565438" [0767d916-7d6c-4214-9664-4e27a34495e3] Running
	I0214 03:00:07.411872 1272199 system_pods.go:61] "metrics-server-69cf46c98-t6ph7" [26ff7f7f-fc20-4e02-a4ec-20b2ee3cd0ad] Running
	I0214 03:00:07.411883 1272199 system_pods.go:61] "nvidia-device-plugin-daemonset-784lh" [13521b17-0908-4b48-bc8e-8cc5352d8e8f] Running
	I0214 03:00:07.411888 1272199 system_pods.go:61] "registry-cdg2k" [5d4d93ef-1e75-4b2a-994e-2292fc6b92a2] Running
	I0214 03:00:07.411893 1272199 system_pods.go:61] "registry-proxy-r45hf" [bc7041d8-bd4a-4d32-9849-568b3e754650] Running
	I0214 03:00:07.411898 1272199 system_pods.go:61] "snapshot-controller-58dbcc7b99-gxtn8" [03d33701-f8dc-4e98-bfd8-d7b7088379bc] Running
	I0214 03:00:07.411903 1272199 system_pods.go:61] "snapshot-controller-58dbcc7b99-w7gf2" [27fc765e-7ef0-492c-bb25-c1b6dc53df7e] Running
	I0214 03:00:07.411912 1272199 system_pods.go:61] "storage-provisioner" [4d5074ed-e43b-4623-bef2-bcbf1c490d16] Running
	I0214 03:00:07.411919 1272199 system_pods.go:74] duration metric: took 176.362103ms to wait for pod list to return data ...
	I0214 03:00:07.411930 1272199 default_sa.go:34] waiting for default service account to be created ...
	I0214 03:00:07.430379 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:07.457674 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:07.604636 1272199 default_sa.go:45] found service account: "default"
	I0214 03:00:07.604682 1272199 default_sa.go:55] duration metric: took 192.744814ms for default service account to be created ...
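(Annotation: the default_sa.go wait above resolves once a ServiceAccount named "default" exists in the "default" namespace, which Kubernetes creates automatically shortly after startup. A sketch of the underlying lookup, assuming the kubeconfig path from this log:)

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Not found simply means "not created yet"; a wait loop would retry here.
	sa, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}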
	I0214 03:00:07.604698 1272199 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 03:00:07.653880 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:07.810053 1272199 system_pods.go:86] 17 kube-system pods found
	I0214 03:00:07.810134 1272199 system_pods.go:89] "coredns-5dd5756b68-zldht" [345ecd07-2b2f-4579-9666-ce49392fcfb2] Running
	I0214 03:00:07.810163 1272199 system_pods.go:89] "csi-hostpath-attacher-0" [65079fee-ef55-4637-9a8c-788e0318ccae] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0214 03:00:07.810188 1272199 system_pods.go:89] "csi-hostpath-resizer-0" [fec2647c-130d-42c1-8b94-8ad5eab2610b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0214 03:00:07.810230 1272199 system_pods.go:89] "csi-hostpathplugin-mnt6r" [e0765358-b75c-4cc1-9adb-d1133964c076] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0214 03:00:07.810250 1272199 system_pods.go:89] "etcd-addons-565438" [a5ce36d4-775c-40fc-946e-dd104e1a388e] Running
	I0214 03:00:07.810270 1272199 system_pods.go:89] "kube-apiserver-addons-565438" [71c3cdf0-cca0-49ba-82b6-af9f48fc2dee] Running
	I0214 03:00:07.810291 1272199 system_pods.go:89] "kube-controller-manager-addons-565438" [7dfdd6c6-69a3-41ef-bcfa-9a6a3465340d] Running
	I0214 03:00:07.810327 1272199 system_pods.go:89] "kube-ingress-dns-minikube" [e59d2f9f-feb4-479e-a76a-bfd609f76604] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0214 03:00:07.810358 1272199 system_pods.go:89] "kube-proxy-bzb4x" [3a762368-e150-40d4-afe1-5e8ea1f6d231] Running
	I0214 03:00:07.810379 1272199 system_pods.go:89] "kube-scheduler-addons-565438" [0767d916-7d6c-4214-9664-4e27a34495e3] Running
	I0214 03:00:07.810407 1272199 system_pods.go:89] "metrics-server-69cf46c98-t6ph7" [26ff7f7f-fc20-4e02-a4ec-20b2ee3cd0ad] Running
	I0214 03:00:07.810442 1272199 system_pods.go:89] "nvidia-device-plugin-daemonset-784lh" [13521b17-0908-4b48-bc8e-8cc5352d8e8f] Running
	I0214 03:00:07.810464 1272199 system_pods.go:89] "registry-cdg2k" [5d4d93ef-1e75-4b2a-994e-2292fc6b92a2] Running
	I0214 03:00:07.810485 1272199 system_pods.go:89] "registry-proxy-r45hf" [bc7041d8-bd4a-4d32-9849-568b3e754650] Running
	I0214 03:00:07.810515 1272199 system_pods.go:89] "snapshot-controller-58dbcc7b99-gxtn8" [03d33701-f8dc-4e98-bfd8-d7b7088379bc] Running
	I0214 03:00:07.810535 1272199 system_pods.go:89] "snapshot-controller-58dbcc7b99-w7gf2" [27fc765e-7ef0-492c-bb25-c1b6dc53df7e] Running
	I0214 03:00:07.810552 1272199 system_pods.go:89] "storage-provisioner" [4d5074ed-e43b-4623-bef2-bcbf1c490d16] Running
	I0214 03:00:07.810574 1272199 system_pods.go:126] duration metric: took 205.850346ms to wait for k8s-apps to be running ...
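(Annotation: the per-pod states in the two listings above -- "Running", or "Pending / Ready:ContainersNotReady (containers with unready status: [...])" -- combine the pod phase with any non-True Ready/ContainersReady conditions, whose Reason and Message fields kubelet fills in. A sketch that reproduces the format approximately; the helper name is illustrative.)

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podStateSummary builds a "Phase / Type:Reason (message)" summary like the
// system_pods listings above (formatting approximate).
func podStateSummary(pod *corev1.Pod) string {
	s := string(pod.Status.Phase)
	for _, c := range pod.Status.Conditions {
		// Ready and ContainersReady carry the "containers with unready status" message.
		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) && c.Status != corev1.ConditionTrue {
			s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
		}
	}
	return s
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{{
			Type:    corev1.PodReady,
			Status:  corev1.ConditionFalse,
			Reason:  "ContainersNotReady",
			Message: "containers with unready status: [csi-attacher]",
		}},
	}}
	fmt.Println(podStateSummary(pod))
	// Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher])
}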
	I0214 03:00:07.810594 1272199 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 03:00:07.810682 1272199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:00:07.824122 1272199 system_svc.go:56] duration metric: took 13.520446ms WaitForService to wait for kubelet.
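(Annotation: the kubelet check above runs inside the node and leans on systemctl's exit code -- "is-active --quiet" exits 0 only when the unit is active. The local equivalent, as an os/exec sketch rather than minikube's ssh_runner:)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet kubelet" exits 0 when the unit is active;
	// --quiet suppresses stdout, so only the exit status matters.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}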
	I0214 03:00:07.824158 1272199 kubeadm.go:581] duration metric: took 41.907222838s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 03:00:07.824180 1272199 node_conditions.go:102] verifying NodePressure condition ...
	I0214 03:00:07.931232 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:07.956950 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:08.009656 1272199 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 03:00:08.009692 1272199 node_conditions.go:123] node cpu capacity is 2
	I0214 03:00:08.009706 1272199 node_conditions.go:105] duration metric: took 185.520231ms to run NodePressure ...
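(Annotation: the NodePressure step above reads capacity and pressure conditions straight from each Node's status; the 203034800Ki ephemeral-storage and 2-CPU figures are those fields for this single-node cluster. A sketch, with the same illustrative client setup as in the earlier annotations:)

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				// a healthy node reports these pressure conditions as False
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}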
	I0214 03:00:08.009719 1272199 start.go:228] waiting for startup goroutines ...
	I0214 03:00:08.153491 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:08.430547 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:08.457628 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:08.653707 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:08.948948 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:08.956755 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:09.153583 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:09.429897 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:09.457079 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:09.652807 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:09.929258 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:09.957219 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:10.155481 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:10.430886 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:10.456661 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:10.653423 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:10.930210 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:10.956468 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:11.153720 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:11.431385 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:11.458950 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:11.654801 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:11.929753 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:11.956519 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:12.153685 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:12.430032 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:12.456880 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:12.653708 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:12.929736 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:12.959481 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:13.153634 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:13.430048 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:13.457049 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:13.653246 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:13.930818 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:13.957634 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:14.153996 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:14.430072 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:14.456625 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:14.653320 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:14.931042 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:14.956308 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:15.154526 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:15.429182 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:15.456775 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:15.656155 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:15.929535 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:15.956799 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:16.153512 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:16.429838 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:16.456671 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:16.653693 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:16.930085 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:16.957510 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:17.155826 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:17.429059 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:17.456086 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:17.655353 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:17.929028 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:17.956426 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:18.154498 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:18.431947 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:18.471734 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:18.653096 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:18.930360 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:18.957015 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:19.153110 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:19.429032 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:19.456618 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:19.657479 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:19.929258 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:19.957723 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:20.155228 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:20.429906 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:20.457860 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:20.653568 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:20.932328 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:20.957392 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:21.154055 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:21.429879 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:21.456483 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:21.653345 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:21.928985 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:21.956659 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:22.153466 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:22.428762 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:22.455988 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:22.655489 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:22.929290 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:22.956995 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:23.153764 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:23.430163 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:23.457688 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:23.653799 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:23.932620 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:23.959492 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:24.153354 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:24.430512 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:24.456953 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:24.653260 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:24.929625 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:24.955766 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:25.154328 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:25.435902 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:25.457327 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:25.652954 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:25.929480 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:25.956836 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:26.153767 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:26.429557 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:26.458146 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:26.652950 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:26.929375 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:26.957038 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:27.152657 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:27.429287 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:27.456532 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:27.653027 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:27.934634 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0214 03:00:27.955936 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:28.152972 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:28.430143 1272199 kapi.go:107] duration metric: took 51.510154069s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0214 03:00:28.456445 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:28.653235 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:28.956770 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:29.153621 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:29.456555 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:29.653351 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:29.956697 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:30.154284 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:30.456719 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:30.653641 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:30.956931 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:31.152990 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:31.456189 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:31.654365 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:31.956549 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:32.154006 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:32.456919 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:32.652840 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:32.955991 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:33.153615 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:33.456545 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:33.653348 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:33.955971 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:34.153693 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:34.456390 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:34.652937 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:34.956709 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:35.153561 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:35.457203 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:35.652731 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:35.957164 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:36.153428 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:36.456093 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:36.653857 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:36.956823 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:37.153683 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:37.456896 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:37.653673 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:37.956540 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:38.153133 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:38.456833 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:38.653490 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:38.956520 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:39.153315 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:39.456458 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:39.652813 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:39.956467 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:40.153732 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:40.456316 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:40.653203 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:40.957036 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:41.155305 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:41.456441 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:41.653285 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:41.957161 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:42.153898 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:42.457939 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:42.653704 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:42.957076 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:43.153217 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:43.456632 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:43.653326 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:43.956826 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:44.153288 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:44.456721 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:44.653357 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:44.956617 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:45.154988 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:45.459535 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:45.653715 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:45.956928 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:46.155085 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:46.461945 1272199 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0214 03:00:46.657282 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:46.962292 1272199 kapi.go:107] duration metric: took 1m11.510647017s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0214 03:00:47.154070 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:47.654101 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:48.153155 1272199 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0214 03:00:48.653620 1272199 kapi.go:107] duration metric: took 1m10.004445483s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0214 03:00:48.657156 1272199 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-565438 cluster.
	I0214 03:00:48.659523 1272199 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0214 03:00:48.662016 1272199 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0214 03:00:48.664124 1272199 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, inspektor-gadget, ingress-dns, yakd, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0214 03:00:48.666553 1272199 addons.go:505] enable addons completed in 1m23.288790148s: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin inspektor-gadget ingress-dns yakd metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0214 03:00:48.666600 1272199 start.go:233] waiting for cluster config update ...
	I0214 03:00:48.666645 1272199 start.go:242] writing updated cluster config ...
	I0214 03:00:48.667014 1272199 ssh_runner.go:195] Run: rm -f paused
	I0214 03:00:49.025138 1272199 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 03:00:49.027764 1272199 out.go:177] * Done! kubectl is now configured to use "addons-565438" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 03:01:43 addons-565438 dockerd[1097]: time="2024-02-14T03:01:43.324569201Z" level=info msg="ignoring event" container=9ef9a938c8d50f53617e65c4802fb3625b8436f53dd0837d0afae74d380fb7d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:01:44 addons-565438 cri-dockerd[1297]: time="2024-02-14T03:01:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1516be4ac3e831bdeb4eea20133b0d9f10bf0a5119a03eccc956fbb2103b501f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Feb 14 03:01:46 addons-565438 cri-dockerd[1297]: time="2024-02-14T03:01:46Z" level=info msg="Stop pulling image gcr.io/google-samples/hello-app:1.0: Status: Downloaded newer image for gcr.io/google-samples/hello-app:1.0"
	Feb 14 03:01:46 addons-565438 dockerd[1097]: time="2024-02-14T03:01:46.544644715Z" level=info msg="ignoring event" container=b887c84627b61a3ab21ce7d2a94b18979601e0acb67f567c6d2426c18afaf499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:01:47 addons-565438 dockerd[1097]: time="2024-02-14T03:01:47.822609129Z" level=info msg="ignoring event" container=94701018f40c3ed73cdcb72f9dffbc5afb2498053000f2c7fd94290f9952eb35 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:01:58 addons-565438 cri-dockerd[1297]: time="2024-02-14T03:01:58Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b64c2fed32c550b4b126283fb6922cc387d0efd7596bd88edde607cd375d52e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Feb 14 03:01:59 addons-565438 cri-dockerd[1297]: time="2024-02-14T03:01:59Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Feb 14 03:02:00 addons-565438 dockerd[1097]: time="2024-02-14T03:02:00.538614219Z" level=info msg="ignoring event" container=9c2c58de663a89de9f27589d35484495be51152580a011bc7077a1134f49e12e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:04 addons-565438 dockerd[1097]: time="2024-02-14T03:02:04.615052858Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=14f591a9f2c0baa86ce66bb2b35f645eb5936c3b9642512c4bcf5fcaf29a1bc7
	Feb 14 03:02:04 addons-565438 dockerd[1097]: time="2024-02-14T03:02:04.691443570Z" level=info msg="ignoring event" container=14f591a9f2c0baa86ce66bb2b35f645eb5936c3b9642512c4bcf5fcaf29a1bc7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:04 addons-565438 dockerd[1097]: time="2024-02-14T03:02:04.815116041Z" level=info msg="ignoring event" container=303f17a6c40236cac0eca2cd87088efccc8acf16d8db75889229ec08509004af module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:04 addons-565438 dockerd[1097]: time="2024-02-14T03:02:04.959469307Z" level=info msg="ignoring event" container=5233196c9ea40e5571a952f68692b25a338a36d0fcf481a8e2b3625bec00e880 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:06 addons-565438 dockerd[1097]: time="2024-02-14T03:02:06.532596202Z" level=info msg="ignoring event" container=e783db0a81024f08d85491aa88d56efa25a9dc58056beb05afb54a417d5659d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:06 addons-565438 dockerd[1097]: time="2024-02-14T03:02:06.683075148Z" level=info msg="ignoring event" container=7b64c2fed32c550b4b126283fb6922cc387d0efd7596bd88edde607cd375d52e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.288754883Z" level=info msg="ignoring event" container=650ef4941ea9b35687d51a257a0f859ee2cb24da2094afaf8aa114e473275a73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.423356747Z" level=info msg="ignoring event" container=4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.438518387Z" level=info msg="ignoring event" container=e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.442440465Z" level=info msg="ignoring event" container=c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.456630897Z" level=info msg="ignoring event" container=8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.474399917Z" level=info msg="ignoring event" container=a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.485008423Z" level=info msg="ignoring event" container=efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.491740617Z" level=info msg="ignoring event" container=56c40a96fecea943101b66916a2deb1a5ce5114d71bea612e7039cf01dbcbcd0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.599053928Z" level=info msg="ignoring event" container=90b27523cdf4202a7f02e4ae334a6d6d4f4f6f274b9e15620b0d0e35958b9cca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.716926170Z" level=info msg="ignoring event" container=8245137ad706e6b089fe41b8230befcd4a7858f588640aa2a6ad4c692c4fa146 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:02:08 addons-565438 dockerd[1097]: time="2024-02-14T03:02:08.769930001Z" level=info msg="ignoring event" container=d3f7c1c4e3b8877481fb98fab968d319c63dd05ff070d89f1eca836962690468 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                         ATTEMPT             POD ID              POD
	5233196c9ea40       dd1b12fcb6097                                                                                                                6 seconds ago        Exited              hello-world-app              2                   1516be4ac3e83       hello-world-app-5d77478584-xqb8t
	ae136cc90089d       nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076                                                33 seconds ago       Running             nginx                        0                   49ccd5679e47f       nginx
	abef73efa546b       fc9db2894f4e4                                                                                                                54 seconds ago       Exited              helper-pod                   0                   8a3c2cb8d7926       helper-pod-delete-pvc-a0ae79f4-863f-4a31-aca1-22c767dfc58a
	9cb8a1bfe0fc4       busybox@sha256:6d9ac9237a84afe1516540f40a0fafdc86859b2141954b4d643af7066d598b74                                              57 seconds ago       Exited              busybox                      0                   f7ee0683a6dff       test-local-path
	5d32d35063b6f       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              About a minute ago   Exited              helper-pod                   0                   c1310d585db80       helper-pod-create-pvc-a0ae79f4-863f-4a31-aca1-22c767dfc58a
	706f23cc9844d       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        About a minute ago   Running             headlamp                     0                   1577d5a02ef9b       headlamp-7ddfbb94ff-68xsh
	8e30ae2594b26       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                     0                   2dca46d749c58       gcp-auth-d4c87556c-f68tc
	c885722947f24       af594c6a879f2                                                                                                                About a minute ago   Exited              patch                        1                   bfaba20e069a3       ingress-nginx-admission-patch-btckw
	cebbc79323204       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                       0                   710c966f83b42       ingress-nginx-admission-create-72dmk
	a14215aeb9643       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                        About a minute ago   Running             yakd                         0                   31c27794f29af       yakd-dashboard-9947fc6bf-q24wb
	c5c8f63dfe05f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner       0                   05c1602df4908       local-path-provisioner-78b46b4d5c-s4nhq
	d3934ff06f253       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      2 minutes ago        Running             volume-snapshot-controller   0                   025fd7af7fa49       snapshot-controller-58dbcc7b99-gxtn8
	c46156f29fd86       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280      2 minutes ago        Running             volume-snapshot-controller   0                   23eeb2deb2e2b       snapshot-controller-58dbcc7b99-w7gf2
	427f050089e9e       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner          0                   cea7014e0615d       storage-provisioner
	43963f0ffa2f5       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                   0                   5456bd071a14f       kube-proxy-bzb4x
	60cc24117d5ac       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                      0                   01767c3967d5a       coredns-5dd5756b68-zldht
	15ab186d231d3       05c284c929889                                                                                                                3 minutes ago        Running             kube-scheduler               0                   84bb48d479bcf       kube-scheduler-addons-565438
	322a3ea45c0c8       9cdd6470f48c8                                                                                                                3 minutes ago        Running             etcd                         0                   5ec5c4f5194b4       etcd-addons-565438
	6660c8c202553       04b4c447bb9d4                                                                                                                3 minutes ago        Running             kube-apiserver               0                   c829c9c7b62a2       kube-apiserver-addons-565438
	79214152cb7e6       9961cbceaf234                                                                                                                3 minutes ago        Running             kube-controller-manager      0                   71ba08bd1b231       kube-controller-manager-addons-565438
	
	
	==> coredns [60cc24117d5a] <==
	[INFO] 10.244.0.19:40060 - 38086 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000140599s
	[INFO] 10.244.0.19:59904 - 60380 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.020530816s
	[INFO] 10.244.0.19:40060 - 37416 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001421788s
	[INFO] 10.244.0.19:59904 - 17422 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002444029s
	[INFO] 10.244.0.19:59904 - 10005 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112752s
	[INFO] 10.244.0.19:40060 - 27967 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001567515s
	[INFO] 10.244.0.19:40060 - 23272 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000058731s
	[INFO] 10.244.0.19:54128 - 23980 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000134798s
	[INFO] 10.244.0.19:54128 - 31048 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089746s
	[INFO] 10.244.0.19:54128 - 8111 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000072048s
	[INFO] 10.244.0.19:39346 - 18184 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065607s
	[INFO] 10.244.0.19:54128 - 51840 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000096826s
	[INFO] 10.244.0.19:39346 - 18180 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000046194s
	[INFO] 10.244.0.19:54128 - 44124 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00016509s
	[INFO] 10.244.0.19:39346 - 48065 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000062079s
	[INFO] 10.244.0.19:54128 - 6999 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081868s
	[INFO] 10.244.0.19:39346 - 25481 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000040795s
	[INFO] 10.244.0.19:39346 - 48683 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000085791s
	[INFO] 10.244.0.19:39346 - 673 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000337615s
	[INFO] 10.244.0.19:54128 - 57382 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001370934s
	[INFO] 10.244.0.19:39346 - 41493 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001321589s
	[INFO] 10.244.0.19:54128 - 49413 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001497281s
	[INFO] 10.244.0.19:54128 - 1774 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000120136s
	[INFO] 10.244.0.19:39346 - 43700 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001101714s
	[INFO] 10.244.0.19:39346 - 8476 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000091369s
	
	
	==> describe nodes <==
	Name:               addons-565438
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-565438
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d
	                    minikube.k8s.io/name=addons-565438
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T02_59_12_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-565438
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 02:59:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-565438
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:02:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:01:46 +0000   Wed, 14 Feb 2024 02:59:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:01:46 +0000   Wed, 14 Feb 2024 02:59:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:01:46 +0000   Wed, 14 Feb 2024 02:59:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 03:01:46 +0000   Wed, 14 Feb 2024 02:59:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-565438
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ef88b4ec4f04cb28ac5ce582af59c86
	  System UUID:                03da0e3f-8c9e-4a89-9098-95dc8454b2fc
	  Boot ID:                    0ec78279-ad11-40d5-8717-d4c1429371b1
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-xqb8t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-f68tc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  headlamp                    headlamp-7ddfbb94ff-68xsh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 coredns-5dd5756b68-zldht                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m45s
	  kube-system                 etcd-addons-565438                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m57s
	  kube-system                 kube-apiserver-addons-565438               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-controller-manager-addons-565438      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-bzb4x                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-scheduler-addons-565438               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 snapshot-controller-58dbcc7b99-gxtn8       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 snapshot-controller-58dbcc7b99-w7gf2       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  local-path-storage          local-path-provisioner-78b46b4d5c-s4nhq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-q24wb             0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (3%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m42s                kube-proxy       
	  Normal  Starting                 3m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m6s (x8 over 3m6s)  kubelet          Node addons-565438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x8 over 3m6s)  kubelet          Node addons-565438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x7 over 3m6s)  kubelet          Node addons-565438 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s                kubelet          Node addons-565438 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s                kubelet          Node addons-565438 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s                kubelet          Node addons-565438 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             2m58s                kubelet          Node addons-565438 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  2m57s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m57s                kubelet          Node addons-565438 status is now: NodeReady
	  Normal  RegisteredNode           2m45s                node-controller  Node addons-565438 event: Registered Node addons-565438 in Controller
	
	
	==> dmesg <==
	[  +0.001042] FS-Cache: N-key=[8] '52613b0000000000'
	[  +0.002693] FS-Cache: Duplicate cookie detected
	[  +0.000702] FS-Cache: O-cookie c=0000003c [p=00000039 fl=226 nc=0 na=1]
	[  +0.000995] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=00000000385eb087
	[  +0.001188] FS-Cache: O-key=[8] '52613b0000000000'
	[  +0.000854] FS-Cache: N-cookie c=00000043 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=00000000f899e452
	[  +0.001109] FS-Cache: N-key=[8] '52613b0000000000'
	[  +2.835177] FS-Cache: Duplicate cookie detected
	[  +0.000813] FS-Cache: O-cookie c=0000003a [p=00000039 fl=226 nc=0 na=1]
	[  +0.001008] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=000000004c5a19ad
	[  +0.001242] FS-Cache: O-key=[8] '51613b0000000000'
	[  +0.000783] FS-Cache: N-cookie c=00000045 [p=00000039 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=000000003212bd54
	[  +0.001059] FS-Cache: N-key=[8] '51613b0000000000'
	[  +0.297329] FS-Cache: Duplicate cookie detected
	[  +0.000840] FS-Cache: O-cookie c=0000003f [p=00000039 fl=226 nc=0 na=1]
	[  +0.001103] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=000000003fbe097e
	[  +0.001200] FS-Cache: O-key=[8] '57613b0000000000'
	[  +0.000796] FS-Cache: N-cookie c=00000046 [p=00000039 fl=2 nc=0 na=1]
	[  +0.001098] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=00000000b30ae34f
	[  +0.001175] FS-Cache: N-key=[8] '57613b0000000000'
	[Feb14 02:20] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
	[  +0.116829] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	[  +0.540873] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [322a3ea45c0c] <==
	{"level":"info","ts":"2024-02-14T02:59:05.665032Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-02-14T02:59:05.665505Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-02-14T02:59:05.665543Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T02:59:05.665563Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T02:59:05.665571Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T02:59:05.675081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-02-14T02:59:05.675193Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-02-14T02:59:06.127709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T02:59:06.127967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T02:59:06.128131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-02-14T02:59:06.128269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T02:59:06.128351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T02:59:06.128434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-02-14T02:59:06.128541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-02-14T02:59:06.131774Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:59:06.135925Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-565438 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T02:59:06.139812Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:59:06.13997Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:59:06.144406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-02-14T02:59:06.139995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T02:59:06.140027Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T02:59:06.151881Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T02:59:06.160431Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T02:59:06.194643Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T02:59:06.224568Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [8e30ae2594b2] <==
	2024/02/14 03:00:48 GCP Auth Webhook started!
	2024/02/14 03:00:50 Ready to marshal response ...
	2024/02/14 03:00:50 Ready to write response ...
	2024/02/14 03:00:50 Ready to marshal response ...
	2024/02/14 03:00:50 Ready to write response ...
	2024/02/14 03:00:50 Ready to marshal response ...
	2024/02/14 03:00:50 Ready to write response ...
	2024/02/14 03:00:59 Ready to marshal response ...
	2024/02/14 03:00:59 Ready to write response ...
	2024/02/14 03:01:07 Ready to marshal response ...
	2024/02/14 03:01:07 Ready to write response ...
	2024/02/14 03:01:07 Ready to marshal response ...
	2024/02/14 03:01:07 Ready to write response ...
	2024/02/14 03:01:15 Ready to marshal response ...
	2024/02/14 03:01:15 Ready to write response ...
	2024/02/14 03:01:29 Ready to marshal response ...
	2024/02/14 03:01:29 Ready to write response ...
	2024/02/14 03:01:34 Ready to marshal response ...
	2024/02/14 03:01:34 Ready to write response ...
	2024/02/14 03:01:44 Ready to marshal response ...
	2024/02/14 03:01:44 Ready to write response ...
	2024/02/14 03:01:58 Ready to marshal response ...
	2024/02/14 03:01:58 Ready to write response ...
	
	
	==> kernel <==
	 03:02:10 up  5:44,  0 users,  load average: 3.11, 2.77, 2.39
	Linux addons-565438 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [6660c8c20255] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0214 02:59:36.642405       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.79.183"}
	I0214 02:59:36.655472       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0214 02:59:36.816830       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.103.233.15"}
	W0214 02:59:37.698520       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0214 02:59:38.407639       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.111.78.211"}
	W0214 03:00:02.897078       1 handler_proxy.go:93] no RequestInfo found in the context
	E0214 03:00:02.897147       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0214 03:00:02.898035       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0214 03:00:02.898132       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.96:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.96:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.96:443: connect: connection refused
	E0214 03:00:02.900379       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.29.96:443/apis/metrics.k8s.io/v1beta1: Get "https://10.100.29.96:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.100.29.96:443: connect: connection refused
	I0214 03:00:03.000768       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0214 03:00:09.364469       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0214 03:00:50.460438       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.144.153"}
	I0214 03:01:09.364137       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0214 03:01:22.763481       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0214 03:01:22.781513       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0214 03:01:23.806112       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0214 03:01:34.411170       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0214 03:01:34.752406       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.184.99"}
	I0214 03:01:41.967944       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0214 03:01:44.578347       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.190.22"}
	I0214 03:02:03.909915       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [79214152cb7e] <==
	E0214 03:01:32.867345       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 03:01:32.884605       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	I0214 03:01:33.753680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-69cf46c98" duration="8.698µs"
	W0214 03:01:41.047091       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 03:01:41.047124       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 03:01:44.026842       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0214 03:01:44.325382       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0214 03:01:44.340832       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-xqb8t"
	I0214 03:01:44.353944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="29.19561ms"
	I0214 03:01:44.374463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.456298ms"
	I0214 03:01:44.396473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="21.938573ms"
	I0214 03:01:44.396601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="86.89µs"
	I0214 03:01:47.639101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.115µs"
	I0214 03:01:48.668510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="104.547µs"
	I0214 03:01:49.694375       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.83µs"
	I0214 03:01:55.227414       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0214 03:01:57.465159       1 event.go:307] "Event occurred" object="default/hpvc-restore" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0214 03:02:01.574992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="8.771µs"
	I0214 03:02:01.575227       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0214 03:02:01.579227       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0214 03:02:05.059448       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="73.623µs"
	W0214 03:02:05.764250       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0214 03:02:05.764283       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0214 03:02:08.200881       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-attacher"
	I0214 03:02:08.283234       1 stateful_set.go:458] "StatefulSet has been deleted" key="kube-system/csi-hostpath-resizer"
	
	
	==> kube-proxy [43963f0ffa2f] <==
	I0214 02:59:27.956093       1 server_others.go:69] "Using iptables proxy"
	I0214 02:59:27.977465       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0214 02:59:28.036271       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0214 02:59:28.042743       1 server_others.go:152] "Using iptables Proxier"
	I0214 02:59:28.042782       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0214 02:59:28.042790       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0214 02:59:28.042816       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0214 02:59:28.043045       1 server.go:846] "Version info" version="v1.28.4"
	I0214 02:59:28.043056       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0214 02:59:28.045599       1 config.go:188] "Starting service config controller"
	I0214 02:59:28.045616       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0214 02:59:28.045636       1 config.go:97] "Starting endpoint slice config controller"
	I0214 02:59:28.045641       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0214 02:59:28.046030       1 config.go:315] "Starting node config controller"
	I0214 02:59:28.046037       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0214 02:59:28.150156       1 shared_informer.go:318] Caches are synced for node config
	I0214 02:59:28.150187       1 shared_informer.go:318] Caches are synced for service config
	I0214 02:59:28.150212       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [15ab186d231d] <==
	W0214 02:59:09.670646       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 02:59:09.670678       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0214 02:59:09.670764       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 02:59:09.670784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0214 02:59:09.670874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 02:59:09.670894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0214 02:59:09.670972       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 02:59:09.670990       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0214 02:59:09.671073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 02:59:09.671091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0214 02:59:09.671173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 02:59:09.671213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0214 02:59:09.671275       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 02:59:09.671289       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0214 02:59:09.671352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:59:09.671371       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:59:10.563954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 02:59:10.563996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0214 02:59:10.599734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0214 02:59:10.599775       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0214 02:59:10.606434       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 02:59:10.606470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 02:59:10.676712       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 02:59:10.676832       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0214 02:59:11.257888       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.566536    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"} err="failed to get container status \"a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa\": rpc error: code = Unknown desc = Error response from daemon: No such container: a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.566555    2304 scope.go:117] "RemoveContainer" containerID="c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.567220    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c"} err="failed to get container status \"c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c\": rpc error: code = Unknown desc = Error response from daemon: No such container: c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.567239    2304 scope.go:117] "RemoveContainer" containerID="8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.567922    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7"} err="failed to get container status \"8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.567968    2304 scope.go:117] "RemoveContainer" containerID="e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.568623    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0"} err="failed to get container status \"e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0\": rpc error: code = Unknown desc = Error response from daemon: No such container: e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.568684    2304 scope.go:117] "RemoveContainer" containerID="4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.569407    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1"} err="failed to get container status \"4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.569449    2304 scope.go:117] "RemoveContainer" containerID="efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.570119    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1"} err="failed to get container status \"efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1\": rpc error: code = Unknown desc = Error response from daemon: No such container: efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.570165    2304 scope.go:117] "RemoveContainer" containerID="a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.570877    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"} err="failed to get container status \"a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa\": rpc error: code = Unknown desc = Error response from daemon: No such container: a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.570927    2304 scope.go:117] "RemoveContainer" containerID="c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.571628    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c"} err="failed to get container status \"c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c\": rpc error: code = Unknown desc = Error response from daemon: No such container: c932584529b6a02ab9e5dd3691e3d45db369ae9142e2f5ea067ce8adf498630c"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.571647    2304 scope.go:117] "RemoveContainer" containerID="8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.572387    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7"} err="failed to get container status \"8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8319f57dd4a0119bdfb765190a89bbf9eb6c3c48dba7718751a92e565dd0c9d7"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.572433    2304 scope.go:117] "RemoveContainer" containerID="e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.573157    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0"} err="failed to get container status \"e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0\": rpc error: code = Unknown desc = Error response from daemon: No such container: e6e2cdbfb3c22684552c0c0816ef400dd03938fa8cd1137f7a460b146b48dcf0"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.573180    2304 scope.go:117] "RemoveContainer" containerID="4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.573863    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1"} err="failed to get container status \"4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4d6dd138c09675acfb2f6adc73c11d9543d058086e8da1ee5901c6743e55b5e1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.573888    2304 scope.go:117] "RemoveContainer" containerID="efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.574567    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1"} err="failed to get container status \"efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1\": rpc error: code = Unknown desc = Error response from daemon: No such container: efa473f4f7005e58a24cc27f72ebb28e2b325a5b5d133406128503f7cc8347d1"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.574592    2304 scope.go:117] "RemoveContainer" containerID="a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"
	Feb 14 03:02:09 addons-565438 kubelet[2304]: I0214 03:02:09.575314    2304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"} err="failed to get container status \"a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa\": rpc error: code = Unknown desc = Error response from daemon: No such container: a1272fa14c8b2264f65c8df474bb9c3647b78fd72f8fd2004569d20ae9123eaa"
	
	
	==> storage-provisioner [427f050089e9] <==
	I0214 02:59:31.779918       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 02:59:31.811201       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 02:59:31.811246       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 02:59:31.820709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 02:59:31.820918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-565438_48704839-82c9-4eb2-910f-ea8f0ec5a21d!
	I0214 02:59:31.821940       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dea4603d-89e1-47b6-bc14-4d629187f69b", APIVersion:"v1", ResourceVersion:"538", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-565438_48704839-82c9-4eb2-910f-ea8f0ec5a21d became leader
	I0214 02:59:31.952244       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-565438_48704839-82c9-4eb2-910f-ea8f0ec5a21d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-565438 -n addons-565438
helpers_test.go:261: (dbg) Run:  kubectl --context addons-565438 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (37.23s)
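The failure above is at the DNS step, not the ingress step: nginx answered the curl probe through the ingress controller, but the ingress-dns resolver at the node IP never answered nslookup. A minimal sketch of checking this by hand against a live profile (profile, namespace, and node IP are taken from the log above; the pod name kube-ingress-dns-minikube is the addon's usual deployment name and is an assumption here, since the report never prints it):

	# Is the ingress-dns pod actually running on the node?
	kubectl --context addons-565438 -n kube-system get pod kube-ingress-dns-minikube -o wide
	# Query UDP/53 on the node IP directly; dig separates a timeout from NXDOMAIN more clearly than nslookup does.
	dig +time=5 +tries=1 @192.168.49.2 hello-john.test

If dig also times out, the problem sits between the test host and 192.168.49.2:53 (or in the pod itself), not in the ingress manifests the test just applied.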

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (57.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-642069 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-642069 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.085363998s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-642069 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-642069 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ba56b700-a8a1-47ec-a75c-8240bd5fbb7f] Pending
helpers_test.go:344: "nginx" [ba56b700-a8a1-47ec-a75c-8240bd5fbb7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ba56b700-a8a1-47ec-a75c-8240bd5fbb7f] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.003430487s
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-642069 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:297: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.022170583s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:299: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:303: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:306: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons disable ingress-dns --alsologtostderr -v=1: (11.05485668s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons disable ingress --alsologtostderr -v=1
E0214 03:10:49.096655 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
addons_test.go:311: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons disable ingress --alsologtostderr -v=1: (7.472970277s)
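The legacy run reproduces the same 15-second nslookup timeout, which points at the ingress-dns addon (or the host's UDP/53 path to the node) rather than at the v1 versus v1beta1 ingress manifests, since both API versions passed the curl probe. A sketch of what could be captured before the addon is disabled (again assuming the standard pod name, which the report does not print):

	# The addon's own logs show whether its DNS server ever started listening.
	kubectl --context ingress-addon-legacy-642069 -n kube-system logs kube-ingress-dns-minikube --tail=50
	# And on the host side: is there a route to the cluster network at all?
	ip route get 192.168.49.2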
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-642069
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-642069:

-- stdout --
	[
	    {
	        "Id": "f2d760ae6e97b358345359ca49bc602f1255f78ed188188113dca948726c83c7",
	        "Created": "2024-02-14T03:08:47.221324758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1319488,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:08:47.494208736Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/f2d760ae6e97b358345359ca49bc602f1255f78ed188188113dca948726c83c7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f2d760ae6e97b358345359ca49bc602f1255f78ed188188113dca948726c83c7/hostname",
	        "HostsPath": "/var/lib/docker/containers/f2d760ae6e97b358345359ca49bc602f1255f78ed188188113dca948726c83c7/hosts",
	        "LogPath": "/var/lib/docker/containers/f2d760ae6e97b358345359ca49bc602f1255f78ed188188113dca948726c83c7/f2d760ae6e97b358345359ca49bc602f1255f78ed188188113dca948726c83c7-json.log",
	        "Name": "/ingress-addon-legacy-642069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-642069:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-642069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f8dc1157f566f01b1e61bac16884db4ddd224c0ae46cbed1e56c080dccff4dda-init/diff:/var/lib/docker/overlay2/5910aa9960042d82258ed2c744f886c75b60e8845789b5b8e9c74bac81b955ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f8dc1157f566f01b1e61bac16884db4ddd224c0ae46cbed1e56c080dccff4dda/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f8dc1157f566f01b1e61bac16884db4ddd224c0ae46cbed1e56c080dccff4dda/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f8dc1157f566f01b1e61bac16884db4ddd224c0ae46cbed1e56c080dccff4dda/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-642069",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-642069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-642069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-642069",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-642069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4efca3ef78726486ac60c08ce52e50aa60eeb191df6e39eec3504c220a51298e",
	            "SandboxKey": "/var/run/docker/netns/4efca3ef7872",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34073"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34070"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-642069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "f2d760ae6e97",
	                        "ingress-addon-legacy-642069"
	                    ],
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "aeaac5a0be010ad18f89da9e0e74eda94871f84fb69c2a6d1397da21662e4050",
	                    "EndpointID": "9e583eebdd7f51d0792e38a0172285d40aace5c4fe5171308daa7c0365f59800",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ingress-addon-legacy-642069",
	                        "f2d760ae6e97"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
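In the inspect output above, HostConfig.PortBindings requests dynamic host ports (empty HostPort fields), while NetworkSettings.Ports records what Docker actually assigned (34070-34074, all bound to 127.0.0.1). The live mapping for a single container port can be read back without parsing the JSON, for example:

	# Prints the host binding for the apiserver port; per the inspect above this is 127.0.0.1:34071.
	docker port ingress-addon-legacy-642069 8443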
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-642069 -n ingress-addon-legacy-642069
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-094137 image ls                                               | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	| image   | functional-094137 image load                                             | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-094137 image ls                                               | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	| image   | functional-094137 image save --daemon                                    | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-094137                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-094137                                                        | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | image ls --format short                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-094137                                                        | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | image ls --format yaml                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| ssh     | functional-094137 ssh pgrep                                              | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC |                     |
	|         | buildkitd                                                                |                             |         |         |                     |                     |
	| image   | functional-094137                                                        | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-094137                                                        | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| image   | functional-094137 image build -t                                         | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	|         | localhost/my-image:functional-094137                                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                         |                             |         |         |                     |                     |
	| image   | functional-094137 image ls                                               | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	| delete  | -p functional-094137                                                     | functional-094137           | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:07 UTC |
	| start   | -p image-105003                                                          | image-105003                | jenkins | v1.32.0 | 14 Feb 24 03:07 UTC | 14 Feb 24 03:08 UTC |
	|         | --driver=docker                                                          |                             |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-105003                | jenkins | v1.32.0 | 14 Feb 24 03:08 UTC | 14 Feb 24 03:08 UTC |
	|         | ./testdata/image-build/test-normal                                       |                             |         |         |                     |                     |
	|         | -p image-105003                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-105003                | jenkins | v1.32.0 | 14 Feb 24 03:08 UTC | 14 Feb 24 03:08 UTC |
	|         | --build-opt=build-arg=ENV_A=test_env_str                                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                                       |                             |         |         |                     |                     |
	|         | image-105003                                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-105003                | jenkins | v1.32.0 | 14 Feb 24 03:08 UTC | 14 Feb 24 03:08 UTC |
	|         | ./testdata/image-build/test-normal                                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                                  |                             |         |         |                     |                     |
	|         | image-105003                                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                                      | image-105003                | jenkins | v1.32.0 | 14 Feb 24 03:08 UTC | 14 Feb 24 03:08 UTC |
	|         | -f inner/Dockerfile                                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                                            |                             |         |         |                     |                     |
	|         | -p image-105003                                                          |                             |         |         |                     |                     |
	| delete  | -p image-105003                                                          | image-105003                | jenkins | v1.32.0 | 14 Feb 24 03:08 UTC | 14 Feb 24 03:08 UTC |
	| start   | -p ingress-addon-legacy-642069                                           | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:08 UTC | 14 Feb 24 03:09 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                     |                             |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-642069                                              | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:09 UTC | 14 Feb 24 03:09 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-642069                                              | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:09 UTC | 14 Feb 24 03:09 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-642069                                              | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:10 UTC | 14 Feb 24 03:10 UTC |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-642069 ip                                           | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:10 UTC | 14 Feb 24 03:10 UTC |
	| addons  | ingress-addon-legacy-642069                                              | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:10 UTC | 14 Feb 24 03:10 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-642069                                              | ingress-addon-legacy-642069 | jenkins | v1.32.0 | 14 Feb 24 03:10 UTC | 14 Feb 24 03:10 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 03:08:23
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 03:08:23.966668 1319034 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:08:23.966795 1319034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:08:23.966806 1319034 out.go:304] Setting ErrFile to fd 2...
	I0214 03:08:23.966812 1319034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:08:23.967041 1319034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 03:08:23.967450 1319034 out.go:298] Setting JSON to false
	I0214 03:08:23.968339 1319034 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":21049,"bootTime":1707859055,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 03:08:23.968417 1319034 start.go:138] virtualization:  
	I0214 03:08:23.971056 1319034 out.go:177] * [ingress-addon-legacy-642069] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:08:23.973568 1319034 out.go:177]   - MINIKUBE_LOCATION=18165
	I0214 03:08:23.975343 1319034 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:08:23.973699 1319034 notify.go:220] Checking for updates...
	I0214 03:08:23.978873 1319034 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 03:08:23.980506 1319034 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 03:08:23.982554 1319034 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:08:23.984422 1319034 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:08:23.986747 1319034 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:08:24.012484 1319034 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:08:24.012606 1319034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:08:24.095434 1319034 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-14 03:08:24.084176799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:08:24.095551 1319034 docker.go:295] overlay module found
	I0214 03:08:24.097750 1319034 out.go:177] * Using the docker driver based on user configuration
	I0214 03:08:24.099386 1319034 start.go:298] selected driver: docker
	I0214 03:08:24.099408 1319034 start.go:902] validating driver "docker" against <nil>
	I0214 03:08:24.099423 1319034 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:08:24.100136 1319034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:08:24.162982 1319034 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-14 03:08:24.153076806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:08:24.163143 1319034 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 03:08:24.163374 1319034 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0214 03:08:24.165591 1319034 out.go:177] * Using Docker driver with root privileges
	I0214 03:08:24.167563 1319034 cni.go:84] Creating CNI manager for ""
	I0214 03:08:24.167588 1319034 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0214 03:08:24.167602 1319034 start_flags.go:321] config:
	{Name:ingress-addon-legacy-642069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-642069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:08:24.169863 1319034 out.go:177] * Starting control plane node ingress-addon-legacy-642069 in cluster ingress-addon-legacy-642069
	I0214 03:08:24.171742 1319034 cache.go:121] Beginning downloading kic base image for docker with docker
	I0214 03:08:24.173729 1319034 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 03:08:24.175371 1319034 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0214 03:08:24.175465 1319034 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 03:08:24.193932 1319034 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 03:08:24.193976 1319034 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 03:08:24.256531 1319034 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0214 03:08:24.256558 1319034 cache.go:56] Caching tarball of preloaded images
	I0214 03:08:24.256734 1319034 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0214 03:08:24.258855 1319034 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0214 03:08:24.261182 1319034 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0214 03:08:24.373310 1319034 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0214 03:08:39.951399 1319034 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0214 03:08:39.951506 1319034 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0214 03:08:41.081121 1319034 cache.go:59] Finished verifying existence of preloaded tar for v1.18.20 on docker
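The `?checksum=md5:...` query on the download URL above is what the checksum steps verify against; a manual spot check (assuming `md5sum` is available on the host) would be:

	md5sum /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	# expected digest, from the URL above: c8c260b886393123ce9d312d8ac2379e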
	I0214 03:08:41.081484 1319034 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/config.json ...
	I0214 03:08:41.081518 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/config.json: {Name:mk145173a3f31d6e6a5b9e7a84c3dcc352a93419 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:08:41.081988 1319034 cache.go:194] Successfully downloaded all kic artifacts
	I0214 03:08:41.082027 1319034 start.go:365] acquiring machines lock for ingress-addon-legacy-642069: {Name:mk67446aae5b4fee4ca2bad998226db8d0d49f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 03:08:41.082093 1319034 start.go:369] acquired machines lock for "ingress-addon-legacy-642069" in 49.525µs
	I0214 03:08:41.082117 1319034 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-642069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-642069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0214 03:08:41.082193 1319034 start.go:125] createHost starting for "" (driver="docker")
	I0214 03:08:41.084197 1319034 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0214 03:08:41.084412 1319034 start.go:159] libmachine.API.Create for "ingress-addon-legacy-642069" (driver="docker")
	I0214 03:08:41.084436 1319034 client.go:168] LocalClient.Create starting
	I0214 03:08:41.084497 1319034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem
	I0214 03:08:41.084531 1319034 main.go:141] libmachine: Decoding PEM data...
	I0214 03:08:41.084549 1319034 main.go:141] libmachine: Parsing certificate...
	I0214 03:08:41.084610 1319034 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem
	I0214 03:08:41.084632 1319034 main.go:141] libmachine: Decoding PEM data...
	I0214 03:08:41.084645 1319034 main.go:141] libmachine: Parsing certificate...
	I0214 03:08:41.085004 1319034 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-642069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 03:08:41.099641 1319034 cli_runner.go:211] docker network inspect ingress-addon-legacy-642069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 03:08:41.099734 1319034 network_create.go:281] running [docker network inspect ingress-addon-legacy-642069] to gather additional debugging logs...
	I0214 03:08:41.099756 1319034 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-642069
	W0214 03:08:41.113798 1319034 cli_runner.go:211] docker network inspect ingress-addon-legacy-642069 returned with exit code 1
	I0214 03:08:41.113829 1319034 network_create.go:284] error running [docker network inspect ingress-addon-legacy-642069]: docker network inspect ingress-addon-legacy-642069: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-642069 not found
	I0214 03:08:41.113843 1319034 network_create.go:286] output of [docker network inspect ingress-addon-legacy-642069]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-642069 not found
	
	** /stderr **
	I0214 03:08:41.113945 1319034 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:08:41.128228 1319034 network.go:207] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400041dd60}
	I0214 03:08:41.128269 1319034 network_create.go:124] attempt to create docker network ingress-addon-legacy-642069 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0214 03:08:41.128328 1319034 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-642069 ingress-addon-legacy-642069
	I0214 03:08:41.188114 1319034 network_create.go:108] docker network ingress-addon-legacy-642069 192.168.49.0/24 created
	I0214 03:08:41.188147 1319034 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-642069" container
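A quick way to confirm the subnet and gateway that network_create.go logged above is a plain `docker network inspect` with a Go template; a sketch:

	docker network inspect ingress-addon-legacy-642069 \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 192.168.49.1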
	I0214 03:08:41.188219 1319034 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 03:08:41.201440 1319034 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-642069 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-642069 --label created_by.minikube.sigs.k8s.io=true
	I0214 03:08:41.216398 1319034 oci.go:103] Successfully created a docker volume ingress-addon-legacy-642069
	I0214 03:08:41.216488 1319034 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-642069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-642069 --entrypoint /usr/bin/test -v ingress-addon-legacy-642069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 03:08:42.632090 1319034 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-642069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-642069 --entrypoint /usr/bin/test -v ingress-addon-legacy-642069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib: (1.415555346s)
	I0214 03:08:42.632130 1319034 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-642069
	I0214 03:08:42.632155 1319034 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0214 03:08:42.632180 1319034 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 03:08:42.632269 1319034 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-642069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 03:08:47.150018 1319034 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-642069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.51769885s)
	I0214 03:08:47.150057 1319034 kic.go:203] duration metric: took 4.517875 seconds to extract preloaded images to volume
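To see what the extraction left behind, one could mount the same volume in a throwaway container; the `alpine` image below is an arbitrary choice for the sketch, not something the test uses:

	docker run --rm -v ingress-addon-legacy-642069:/var alpine ls /var/lib/docker
	# should show overlay2/, image/, etc. unpacked from the preload tarball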
	W0214 03:08:47.150201 1319034 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 03:08:47.150307 1319034 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 03:08:47.206599 1319034 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-642069 --name ingress-addon-legacy-642069 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-642069 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-642069 --network ingress-addon-legacy-642069 --ip 192.168.49.2 --volume ingress-addon-legacy-642069:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 03:08:47.504435 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Running}}
	I0214 03:08:47.525608 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Status}}
	I0214 03:08:47.543523 1319034 cli_runner.go:164] Run: docker exec ingress-addon-legacy-642069 stat /var/lib/dpkg/alternatives/iptables
	I0214 03:08:47.598597 1319034 oci.go:144] the created container "ingress-addon-legacy-642069" has a running status.
	I0214 03:08:47.598625 1319034 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa...
	I0214 03:08:47.784211 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0214 03:08:47.784258 1319034 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 03:08:47.804596 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Status}}
	I0214 03:08:47.826973 1319034 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 03:08:47.826998 1319034 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-642069 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 03:08:47.892851 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Status}}
	I0214 03:08:47.922219 1319034 machine.go:88] provisioning docker machine ...
	I0214 03:08:47.922254 1319034 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-642069"
	I0214 03:08:47.922322 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:47.946924 1319034 main.go:141] libmachine: Using SSH client type: native
	I0214 03:08:47.947358 1319034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34074 <nil> <nil>}
	I0214 03:08:47.947377 1319034 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-642069 && echo "ingress-addon-legacy-642069" | sudo tee /etc/hostname
	I0214 03:08:47.948047 1319034 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59630->127.0.0.1:34074: read: connection reset by peer
	I0214 03:08:51.097157 1319034 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-642069
	
	I0214 03:08:51.097247 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:51.118835 1319034 main.go:141] libmachine: Using SSH client type: native
	I0214 03:08:51.119247 1319034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34074 <nil> <nil>}
	I0214 03:08:51.119265 1319034 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-642069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-642069/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-642069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 03:08:51.255619 1319034 main.go:141] libmachine: SSH cmd err, output: <nil>: 
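The key pair created above, together with the published port 34074 and the `docker` user, is enough to reach the node directly; a sketch (the trailing `hostname` command is just an example):

	ssh -i /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa \
	  -p 34074 docker@127.0.0.1 hostname
	# expected: ingress-addon-legacy-642069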
	I0214 03:08:51.255674 1319034 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18165-1266022/.minikube CaCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18165-1266022/.minikube}
	I0214 03:08:51.255715 1319034 ubuntu.go:177] setting up certificates
	I0214 03:08:51.255725 1319034 provision.go:83] configureAuth start
	I0214 03:08:51.255809 1319034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-642069
	I0214 03:08:51.272243 1319034 provision.go:138] copyHostCerts
	I0214 03:08:51.272290 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem
	I0214 03:08:51.272324 1319034 exec_runner.go:144] found /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem, removing ...
	I0214 03:08:51.272335 1319034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem
	I0214 03:08:51.272418 1319034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem (1679 bytes)
	I0214 03:08:51.272504 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem
	I0214 03:08:51.272528 1319034 exec_runner.go:144] found /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem, removing ...
	I0214 03:08:51.272536 1319034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem
	I0214 03:08:51.272564 1319034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem (1078 bytes)
	I0214 03:08:51.272611 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem
	I0214 03:08:51.272632 1319034 exec_runner.go:144] found /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem, removing ...
	I0214 03:08:51.272641 1319034 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem
	I0214 03:08:51.272666 1319034 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem (1123 bytes)
	I0214 03:08:51.272746 1319034 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-642069 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-642069]
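The SANs baked into the generated server certificate can be checked with openssl (assumed available on the host); a sketch:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# should list 192.168.49.2, 127.0.0.1, localhost, minikube, ingress-addon-legacy-642069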
	I0214 03:08:52.203420 1319034 provision.go:172] copyRemoteCerts
	I0214 03:08:52.203499 1319034 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 03:08:52.203547 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:52.219062 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:08:52.312799 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0214 03:08:52.312864 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0214 03:08:52.337286 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0214 03:08:52.337352 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 03:08:52.361355 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0214 03:08:52.361431 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0214 03:08:52.385304 1319034 provision.go:86] duration metric: configureAuth took 1.129562219s
	I0214 03:08:52.385330 1319034 ubuntu.go:193] setting minikube options for container-runtime
	I0214 03:08:52.385532 1319034 config.go:182] Loaded profile config "ingress-addon-legacy-642069": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0214 03:08:52.385599 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:52.401229 1319034 main.go:141] libmachine: Using SSH client type: native
	I0214 03:08:52.401643 1319034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34074 <nil> <nil>}
	I0214 03:08:52.401659 1319034 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0214 03:08:52.532070 1319034 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0214 03:08:52.532134 1319034 ubuntu.go:71] root file system type: overlay
	I0214 03:08:52.532291 1319034 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0214 03:08:52.532406 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:52.547809 1319034 main.go:141] libmachine: Using SSH client type: native
	I0214 03:08:52.548243 1319034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34074 <nil> <nil>}
	I0214 03:08:52.548326 1319034 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0214 03:08:52.691828 1319034 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0214 03:08:52.691922 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:52.708944 1319034 main.go:141] libmachine: Using SSH client type: native
	I0214 03:08:52.709372 1319034 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34074 <nil> <nil>}
	I0214 03:08:52.709433 1319034 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0214 03:08:53.425872 1319034 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 03:08:52.686472230 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0214 03:08:53.425970 1319034 machine.go:91] provisioned docker machine in 5.503721176s
	I0214 03:08:53.426018 1319034 client.go:171] LocalClient.Create took 12.3415739s
	I0214 03:08:53.426053 1319034 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-642069" took 12.341639393s
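After the docker.service rewrite and restart shown above, the unit systemd actually loaded can be confirmed with a standard systemctl query; a sketch:

	sudo systemctl show docker --property=ExecStart --no-pager
	# the value should include the --tlsverify and --insecure-registry 10.96.0.0/12 flags injected above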
	I0214 03:08:53.426081 1319034 start.go:300] post-start starting for "ingress-addon-legacy-642069" (driver="docker")
	I0214 03:08:53.426119 1319034 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 03:08:53.426206 1319034 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 03:08:53.426272 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:53.441358 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:08:53.536664 1319034 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 03:08:53.539699 1319034 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 03:08:53.539736 1319034 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 03:08:53.539748 1319034 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 03:08:53.539755 1319034 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 03:08:53.539765 1319034 filesync.go:126] Scanning /home/jenkins/minikube-integration/18165-1266022/.minikube/addons for local assets ...
	I0214 03:08:53.539824 1319034 filesync.go:126] Scanning /home/jenkins/minikube-integration/18165-1266022/.minikube/files for local assets ...
	I0214 03:08:53.539918 1319034 filesync.go:149] local asset: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem -> 12713802.pem in /etc/ssl/certs
	I0214 03:08:53.539930 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem -> /etc/ssl/certs/12713802.pem
	I0214 03:08:53.540039 1319034 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 03:08:53.548379 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem --> /etc/ssl/certs/12713802.pem (1708 bytes)
	I0214 03:08:53.571914 1319034 start.go:303] post-start completed in 145.803621ms
	I0214 03:08:53.572313 1319034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-642069
	I0214 03:08:53.587805 1319034 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/config.json ...
	I0214 03:08:53.588090 1319034 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:08:53.588147 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:53.603388 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:08:53.692411 1319034 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 03:08:53.696745 1319034 start.go:128] duration metric: createHost completed in 12.614536353s
	I0214 03:08:53.696770 1319034 start.go:83] releasing machines lock for "ingress-addon-legacy-642069", held for 12.614665359s
	I0214 03:08:53.696852 1319034 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-642069
	I0214 03:08:53.712048 1319034 ssh_runner.go:195] Run: cat /version.json
	I0214 03:08:53.712106 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:53.712337 1319034 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 03:08:53.712408 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:08:53.730159 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:08:53.733938 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:08:53.819040 1319034 ssh_runner.go:195] Run: systemctl --version
	I0214 03:08:53.952399 1319034 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 03:08:53.956522 1319034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 03:08:53.982443 1319034 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 03:08:53.982539 1319034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0214 03:08:54.009059 1319034 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0214 03:08:54.027996 1319034 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0214 03:08:54.028028 1319034 start.go:475] detecting cgroup driver to use...
	I0214 03:08:54.028064 1319034 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:08:54.028172 1319034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:08:54.047539 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0214 03:08:54.059281 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 03:08:54.070390 1319034 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 03:08:54.070533 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 03:08:54.080955 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:08:54.092314 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 03:08:54.103164 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:08:54.113794 1319034 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 03:08:54.123966 1319034 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0214 03:08:54.134944 1319034 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 03:08:54.144257 1319034 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 03:08:54.153285 1319034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:08:54.235479 1319034 ssh_runner.go:195] Run: sudo systemctl restart containerd
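The sed edits above pin containerd to the cgroupfs driver before the restart; a quick check of the resulting config (plain grep, and note the exact stanza path varies by containerd version):

	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# expected: SystemdCgroup = false, matching the "cgroupfs" driver detected on the host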
	I0214 03:08:54.348089 1319034 start.go:475] detecting cgroup driver to use...
	I0214 03:08:54.348185 1319034 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:08:54.348263 1319034 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0214 03:08:54.360390 1319034 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0214 03:08:54.360511 1319034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 03:08:54.375318 1319034 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:08:54.393671 1319034 ssh_runner.go:195] Run: which cri-dockerd
	I0214 03:08:54.397367 1319034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0214 03:08:54.406691 1319034 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0214 03:08:54.425663 1319034 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0214 03:08:54.520752 1319034 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0214 03:08:54.624580 1319034 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0214 03:08:54.624710 1319034 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0214 03:08:54.644718 1319034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:08:54.724077 1319034 ssh_runner.go:195] Run: sudo systemctl restart docker
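The 130-byte /etc/docker/daemon.json written above is not shown in the log; based on the cgroup-driver message it is likely of this shape (an assumption about the payload, not a quote from the log):

	cat /etc/docker/daemon.json
	# {"exec-opts": ["native.cgroupdriver=cgroupfs"], "log-driver": "json-file",
	#  "log-opts": {"max-size": "100m"}, "storage-driver": "overlay2"}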
	I0214 03:08:54.957304 1319034 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0214 03:08:54.978665 1319034 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0214 03:08:55.005995 1319034 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I0214 03:08:55.006160 1319034 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-642069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:08:55.026697 1319034 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0214 03:08:55.031102 1319034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
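The /etc/hosts rewrite above adds the host gateway alias; verifying it is a one-liner:

	grep 'host.minikube.internal' /etc/hosts
	# expected: 192.168.49.1	host.minikube.internal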
	I0214 03:08:55.042960 1319034 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0214 03:08:55.043039 1319034 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0214 03:08:55.060881 1319034 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0214 03:08:55.060910 1319034 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0214 03:08:55.060966 1319034 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0214 03:08:55.070033 1319034 ssh_runner.go:195] Run: which lz4
	I0214 03:08:55.073503 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0214 03:08:55.073606 1319034 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0214 03:08:55.077574 1319034 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0214 03:08:55.077612 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0214 03:08:57.169287 1319034 docker.go:649] Took 2.095717 seconds to copy over tarball
	I0214 03:08:57.169378 1319034 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0214 03:08:59.581582 1319034 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.412177949s)
	I0214 03:08:59.581658 1319034 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0214 03:08:59.786460 1319034 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0214 03:08:59.795692 1319034 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0214 03:08:59.812749 1319034 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:08:59.904390 1319034 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0214 03:09:02.321816 1319034 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.417394532s)
	I0214 03:09:02.321899 1319034 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0214 03:09:02.339282 1319034 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0214 03:09:02.339302 1319034 docker.go:691] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0214 03:09:02.339311 1319034 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0214 03:09:02.340967 1319034 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0214 03:09:02.341157 1319034 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0214 03:09:02.341296 1319034 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0214 03:09:02.341442 1319034 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:09:02.341513 1319034 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:09:02.341579 1319034 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:09:02.341732 1319034 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:09:02.341933 1319034 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:09:02.342027 1319034 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0214 03:09:02.342388 1319034 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0214 03:09:02.342688 1319034 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0214 03:09:02.343163 1319034 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:09:02.343455 1319034 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:09:02.343813 1319034 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:09:02.344222 1319034 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:09:02.345113 1319034 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0214 03:09:02.683772 1319034 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.683972 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0214 03:09:02.689300 1319034 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.689637 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:09:02.700392 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0214 03:09:02.704558 1319034 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0214 03:09:02.704602 1319034 docker.go:337] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0214 03:09:02.704652 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	W0214 03:09:02.709018 1319034 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.709182 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0214 03:09:02.710437 1319034 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.710608 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0214 03:09:02.714179 1319034 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.714336 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:09:02.722048 1319034 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0214 03:09:02.722092 1319034 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0214 03:09:02.722148 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	W0214 03:09:02.741521 1319034 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.741687 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:09:02.747789 1319034 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0214 03:09:02.747834 1319034 docker.go:337] Removing image: registry.k8s.io/pause:3.2
	I0214 03:09:02.747892 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0214 03:09:02.767300 1319034 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0214 03:09:02.767355 1319034 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:09:02.767406 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0214 03:09:02.768558 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0214 03:09:02.788383 1319034 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0214 03:09:02.788490 1319034 docker.go:337] Removing image: registry.k8s.io/coredns:1.6.7
	I0214 03:09:02.788582 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0214 03:09:02.805678 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0214 03:09:02.806064 1319034 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0214 03:09:02.806136 1319034 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:09:02.806241 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0214 03:09:02.806832 1319034 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0214 03:09:02.806883 1319034 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:09:02.806943 1319034 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0214 03:09:02.832353 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0214 03:09:02.832403 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0214 03:09:02.836810 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0214 03:09:02.836853 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0214 03:09:02.847826 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0214 03:09:02.865963 1319034 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0214 03:09:02.866121 1319034 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:09:02.882281 1319034 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0214 03:09:02.882335 1319034 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:09:02.882388 1319034 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:09:02.910448 1319034 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0214 03:09:02.910527 1319034 cache_images.go:92] LoadImages completed in 571.205069ms
	W0214 03:09:02.910601 1319034 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
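This warning concerns minikube's host-side image cache (the arm64 cache file for etcd was never downloaded, per the stat error), not the preload already extracted into the node, so startup continues. The arch mismatches that triggered the reload attempts can be inspected directly; a sketch (the tag may already be gone after the `docker rmi` calls above):

	docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/etcd:3.4.3-0
	# linux/amd64 here, hence "want arm64 got amd64. fixing"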
	I0214 03:09:02.910664 1319034 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0214 03:09:02.961405 1319034 cni.go:84] Creating CNI manager for ""
	I0214 03:09:02.961478 1319034 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0214 03:09:02.961502 1319034 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 03:09:02.961524 1319034 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-642069 NodeName:ingress-addon-legacy-642069 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0214 03:09:02.961673 1319034 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-642069"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
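A config like the one above can be exercised without mutating the node: kubeadm init accepts a --dry-run flag that only prints what would be done. A sketch, assuming kubeadm is on PATH and the file has already been copied to the node as logged below:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run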
	
	I0214 03:09:02.961740 1319034 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-642069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-642069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 03:09:02.961810 1319034 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0214 03:09:02.970442 1319034 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 03:09:02.970534 1319034 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 03:09:02.978857 1319034 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0214 03:09:02.995956 1319034 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0214 03:09:03.016263 1319034 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0214 03:09:03.035953 1319034 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0214 03:09:03.039822 1319034 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
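The one-liner above rewrites /etc/hosts via a temp file: grep -v drops any stale control-plane.minikube.internal entry, echo appends the current mapping, and a single sudo cp installs the result. The same commands, annotated:

	{
	  # keep every line except an old control-plane.minikube.internal entry
	  grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  # re-add the mapping with the node's current IP
	  echo "192.168.49.2	control-plane.minikube.internal"
	} > /tmp/h.$$
	# install the rebuilt file in one copy
	sudo cp /tmp/h.$$ /etc/hosts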
	I0214 03:09:03.051174 1319034 certs.go:56] Setting up /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069 for IP: 192.168.49.2
	I0214 03:09:03.051211 1319034 certs.go:190] acquiring lock for shared ca certs: {Name:mk38eec77f10b2e9943b70dec5fadf9f48ce78cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:03.051352 1319034 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key
	I0214 03:09:03.051396 1319034 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key
	I0214 03:09:03.051441 1319034 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.key
	I0214 03:09:03.051454 1319034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt with IP's: []
	I0214 03:09:03.540609 1319034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt ...
	I0214 03:09:03.540686 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: {Name:mk8563cd9be4ec31f4eb21b12f73a44bb53557a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:03.540893 1319034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.key ...
	I0214 03:09:03.540909 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.key: {Name:mk02819c48fc22d33a7ccbc77e98c1214e4a83b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:03.541007 1319034 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key.dd3b5fb2
	I0214 03:09:03.541026 1319034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 03:09:03.825770 1319034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt.dd3b5fb2 ...
	I0214 03:09:03.825802 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt.dd3b5fb2: {Name:mkb84e0f269d2f310f02eab8c17388e749e21b36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:03.825982 1319034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key.dd3b5fb2 ...
	I0214 03:09:03.825995 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key.dd3b5fb2: {Name:mk3c456d3d287ad8ac99f70253fe61f438bcf764 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:03.826075 1319034 certs.go:337] copying /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt
	I0214 03:09:03.826162 1319034 certs.go:341] copying /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key
	I0214 03:09:03.826225 1319034 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.key
	I0214 03:09:03.826242 1319034 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.crt with IP's: []
	I0214 03:09:04.425984 1319034 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.crt ...
	I0214 03:09:04.426017 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.crt: {Name:mkfe64d673dbfa0cca033d5188356121a350d55f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:04.426216 1319034 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.key ...
	I0214 03:09:04.426231 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.key: {Name:mk23c3fe8a4e075fca3bd7bba137b6703695cf7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:04.426313 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0214 03:09:04.426334 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0214 03:09:04.426346 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0214 03:09:04.426359 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0214 03:09:04.426373 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0214 03:09:04.426385 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0214 03:09:04.426400 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0214 03:09:04.426421 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0214 03:09:04.426485 1319034 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380.pem (1338 bytes)
	W0214 03:09:04.426525 1319034 certs.go:433] ignoring /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380_empty.pem, impossibly tiny 0 bytes
	I0214 03:09:04.426540 1319034 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 03:09:04.426569 1319034 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem (1078 bytes)
	I0214 03:09:04.426600 1319034 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem (1123 bytes)
	I0214 03:09:04.426630 1319034 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem (1679 bytes)
	I0214 03:09:04.426686 1319034 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem (1708 bytes)
	I0214 03:09:04.426718 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem -> /usr/share/ca-certificates/12713802.pem
	I0214 03:09:04.426734 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:09:04.426745 1319034 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380.pem -> /usr/share/ca-certificates/1271380.pem
	I0214 03:09:04.427299 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 03:09:04.451521 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0214 03:09:04.474835 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 03:09:04.497572 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0214 03:09:04.520239 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 03:09:04.543217 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 03:09:04.566507 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 03:09:04.589453 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 03:09:04.612775 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem --> /usr/share/ca-certificates/12713802.pem (1708 bytes)
	I0214 03:09:04.635439 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 03:09:04.658955 1319034 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380.pem --> /usr/share/ca-certificates/1271380.pem (1338 bytes)
	I0214 03:09:04.681658 1319034 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 03:09:04.699551 1319034 ssh_runner.go:195] Run: openssl version
	I0214 03:09:04.704915 1319034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 03:09:04.713854 1319034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:09:04.717156 1319034 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:58 /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:09:04.717221 1319034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:09:04.723930 1319034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 03:09:04.732755 1319034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1271380.pem && ln -fs /usr/share/ca-certificates/1271380.pem /etc/ssl/certs/1271380.pem"
	I0214 03:09:04.741598 1319034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1271380.pem
	I0214 03:09:04.744951 1319034 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 03:03 /usr/share/ca-certificates/1271380.pem
	I0214 03:09:04.745063 1319034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1271380.pem
	I0214 03:09:04.751827 1319034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1271380.pem /etc/ssl/certs/51391683.0"
	I0214 03:09:04.760738 1319034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12713802.pem && ln -fs /usr/share/ca-certificates/12713802.pem /etc/ssl/certs/12713802.pem"
	I0214 03:09:04.769860 1319034 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12713802.pem
	I0214 03:09:04.773259 1319034 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 03:03 /usr/share/ca-certificates/12713802.pem
	I0214 03:09:04.773325 1319034 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12713802.pem
	I0214 03:09:04.780215 1319034 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12713802.pem /etc/ssl/certs/3ec20f2e.0"
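The three openssl x509 -hash / ln -fs rounds above follow OpenSSL's CA lookup convention: a certificate in /etc/ssl/certs is located through a symlink named <subject-hash>.0 that points at the PEM file, which is why the links are called b5213941.0, 51391683.0 and 3ec20f2e.0 rather than anything readable. For one certificate (using the minikubeCA cert from the log as the example) the pattern is:

	# compute the subject hash OpenSSL will look for
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the cert under that hash in the trust directory
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"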
	I0214 03:09:04.789698 1319034 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 03:09:04.793617 1319034 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 03:09:04.793709 1319034 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-642069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-642069 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:09:04.793858 1319034 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0214 03:09:04.811174 1319034 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 03:09:04.819849 1319034 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 03:09:04.828348 1319034 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 03:09:04.828463 1319034 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 03:09:04.837413 1319034 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 03:09:04.837459 1319034 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 03:09:04.894250 1319034 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0214 03:09:04.894341 1319034 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 03:09:05.086762 1319034 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 03:09:05.086841 1319034 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 03:09:05.086893 1319034 kubeadm.go:322] DOCKER_VERSION: 24.0.7
	I0214 03:09:05.086931 1319034 kubeadm.go:322] OS: Linux
	I0214 03:09:05.086977 1319034 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 03:09:05.087027 1319034 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 03:09:05.087075 1319034 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 03:09:05.087124 1319034 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 03:09:05.087174 1319034 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 03:09:05.087223 1319034 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 03:09:05.175890 1319034 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 03:09:05.176146 1319034 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 03:09:05.176265 1319034 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 03:09:05.354640 1319034 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 03:09:05.356509 1319034 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 03:09:05.356731 1319034 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 03:09:05.460007 1319034 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 03:09:05.463006 1319034 out.go:204]   - Generating certificates and keys ...
	I0214 03:09:05.463096 1319034 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 03:09:05.463162 1319034 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 03:09:05.686274 1319034 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 03:09:05.952484 1319034 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 03:09:06.348712 1319034 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 03:09:06.992469 1319034 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 03:09:07.211139 1319034 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 03:09:07.211443 1319034 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-642069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 03:09:07.868425 1319034 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 03:09:07.868818 1319034 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-642069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0214 03:09:08.609038 1319034 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 03:09:08.960598 1319034 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 03:09:09.212856 1319034 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 03:09:09.212964 1319034 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 03:09:09.400217 1319034 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 03:09:09.684190 1319034 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 03:09:09.986759 1319034 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 03:09:10.467600 1319034 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 03:09:10.471499 1319034 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 03:09:10.473997 1319034 out.go:204]   - Booting up control plane ...
	I0214 03:09:10.474117 1319034 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 03:09:10.481818 1319034 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 03:09:10.481914 1319034 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 03:09:10.482005 1319034 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 03:09:10.482173 1319034 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 03:09:23.485079 1319034 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.003286 seconds
	I0214 03:09:23.485194 1319034 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 03:09:23.498376 1319034 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 03:09:24.018221 1319034 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 03:09:24.018386 1319034 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-642069 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0214 03:09:24.527048 1319034 kubeadm.go:322] [bootstrap-token] Using token: fu1z3d.f2syxb238fuxpn81
	I0214 03:09:24.528970 1319034 out.go:204]   - Configuring RBAC rules ...
	I0214 03:09:24.529090 1319034 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 03:09:24.535313 1319034 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 03:09:24.545727 1319034 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 03:09:24.551811 1319034 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 03:09:24.554952 1319034 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 03:09:24.558160 1319034 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 03:09:24.572117 1319034 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 03:09:24.847162 1319034 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 03:09:24.953747 1319034 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 03:09:24.954832 1319034 kubeadm.go:322] 
	I0214 03:09:24.954905 1319034 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 03:09:24.954917 1319034 kubeadm.go:322] 
	I0214 03:09:24.954989 1319034 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 03:09:24.954997 1319034 kubeadm.go:322] 
	I0214 03:09:24.955022 1319034 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 03:09:24.955080 1319034 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 03:09:24.955133 1319034 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 03:09:24.955145 1319034 kubeadm.go:322] 
	I0214 03:09:24.955194 1319034 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 03:09:24.955266 1319034 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 03:09:24.955351 1319034 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 03:09:24.955364 1319034 kubeadm.go:322] 
	I0214 03:09:24.955442 1319034 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 03:09:24.955517 1319034 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 03:09:24.955525 1319034 kubeadm.go:322] 
	I0214 03:09:24.955603 1319034 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fu1z3d.f2syxb238fuxpn81 \
	I0214 03:09:24.955776 1319034 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185 \
	I0214 03:09:24.955802 1319034 kubeadm.go:322]     --control-plane 
	I0214 03:09:24.955811 1319034 kubeadm.go:322] 
	I0214 03:09:24.955890 1319034 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 03:09:24.955899 1319034 kubeadm.go:322] 
	I0214 03:09:24.955975 1319034 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fu1z3d.f2syxb238fuxpn81 \
	I0214 03:09:24.956076 1319034 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185 
	I0214 03:09:24.959321 1319034 kubeadm.go:322] W0214 03:09:04.893658    1650 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0214 03:09:24.959498 1319034 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0214 03:09:24.959619 1319034 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I0214 03:09:24.959836 1319034 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 03:09:24.959933 1319034 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 03:09:24.960065 1319034 kubeadm.go:322] W0214 03:09:10.477373    1650 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0214 03:09:24.960190 1319034 kubeadm.go:322] W0214 03:09:10.478572    1650 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0214 03:09:24.960211 1319034 cni.go:84] Creating CNI manager for ""
	I0214 03:09:24.960228 1319034 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0214 03:09:24.960245 1319034 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 03:09:24.960371 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:24.960438 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d minikube.k8s.io/name=ingress-addon-legacy-642069 minikube.k8s.io/updated_at=2024_02_14T03_09_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:25.446671 1319034 ops.go:34] apiserver oom_adj: -16
	I0214 03:09:25.446778 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:25.947865 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:26.447424 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:26.947690 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:27.447553 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:27.947419 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:28.447895 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:28.946927 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:29.447394 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:29.946929 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:30.446922 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:30.947864 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:31.447799 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:31.947865 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:32.447316 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:32.947290 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:33.447439 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:33.947261 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:34.447389 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:34.947625 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:35.447116 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:35.947759 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:36.447759 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:36.947618 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:37.447212 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:37.947645 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:38.447691 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:38.947736 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:39.446848 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:39.946941 1319034 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:09:40.059053 1319034 kubeadm.go:1088] duration metric: took 15.098723851s to wait for elevateKubeSystemPrivileges.
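The thirty identical kubectl get sa runs above are a fixed-interval poll: minikube retries roughly every 500ms until the default service account exists, the signal that kube-controller-manager's service-account controller has come up. The shell equivalent of that loop:

	# poll for the default ServiceAccount, ~500ms apart, as in the log
	until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done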
	I0214 03:09:40.059086 1319034 kubeadm.go:406] StartCluster complete in 35.26538527s
	I0214 03:09:40.059107 1319034 settings.go:142] acquiring lock: {Name:mka5ccfc6e6b301490609b4401d47e44477d3784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:40.059173 1319034 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 03:09:40.059945 1319034 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/kubeconfig: {Name:mk66f7cad9af599b8ab92f8fcd3383675b5457c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:09:40.060743 1319034 kapi.go:59] client config for ingress-addon-legacy-642069: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt", KeyFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.key", CAFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 03:09:40.061839 1319034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 03:09:40.062120 1319034 config.go:182] Loaded profile config "ingress-addon-legacy-642069": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0214 03:09:40.062195 1319034 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 03:09:40.062262 1319034 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-642069"
	I0214 03:09:40.062278 1319034 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-642069"
	I0214 03:09:40.062325 1319034 host.go:66] Checking if "ingress-addon-legacy-642069" exists ...
	I0214 03:09:40.062795 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Status}}
	I0214 03:09:40.063382 1319034 cert_rotation.go:137] Starting client certificate rotation controller
	I0214 03:09:40.063425 1319034 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-642069"
	I0214 03:09:40.063444 1319034 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-642069"
	I0214 03:09:40.063767 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Status}}
	I0214 03:09:40.107756 1319034 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:09:40.110999 1319034 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:09:40.111026 1319034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 03:09:40.111102 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:09:40.118181 1319034 kapi.go:59] client config for ingress-addon-legacy-642069: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt", KeyFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.key", CAFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 03:09:40.118465 1319034 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-642069"
	I0214 03:09:40.118493 1319034 host.go:66] Checking if "ingress-addon-legacy-642069" exists ...
	I0214 03:09:40.118971 1319034 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-642069 --format={{.State.Status}}
	I0214 03:09:40.162914 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:09:40.182090 1319034 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 03:09:40.182112 1319034 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 03:09:40.182199 1319034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-642069
	I0214 03:09:40.226531 1319034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34074 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/ingress-addon-legacy-642069/id_rsa Username:docker}
	I0214 03:09:40.394323 1319034 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
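That pipeline patches the live Corefile with no intermediate file: the coredns ConfigMap is read as YAML, sed splices a hosts{} block (mapping host.minikube.internal to the host gateway 192.168.49.1) in front of the forward directive, and the result is piped straight back into kubectl replace. The skeleton of the pattern, with the sed program abridged here (the full expression is in the logged command):

	sudo kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward .../i \        hosts { ... }' \
	  | sudo kubectl replace -f -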
	I0214 03:09:40.397738 1319034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 03:09:40.530596 1319034 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:09:40.564462 1319034 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-642069" context rescaled to 1 replicas
	I0214 03:09:40.564510 1319034 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0214 03:09:40.566905 1319034 out.go:177] * Verifying Kubernetes components...
	I0214 03:09:40.569292 1319034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:09:40.871645 1319034 start.go:929] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0214 03:09:41.092009 1319034 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0214 03:09:41.089579 1319034 kapi.go:59] client config for ingress-addon-legacy-642069: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt", KeyFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.key", CAFile:"/home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c7bb0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0214 03:09:41.094173 1319034 addons.go:505] enable addons completed in 1.031972315s: enabled=[default-storageclass storage-provisioner]
	I0214 03:09:41.094482 1319034 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-642069" to be "Ready" ...
	I0214 03:09:41.099508 1319034 node_ready.go:49] node "ingress-addon-legacy-642069" has status "Ready":"True"
	I0214 03:09:41.099532 1319034 node_ready.go:38] duration metric: took 5.026046ms waiting for node "ingress-addon-legacy-642069" to be "Ready" ...
	I0214 03:09:41.099544 1319034 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:09:41.106987 1319034 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zrt2t" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:43.116725 1319034 pod_ready.go:102] pod "coredns-66bff467f8-zrt2t" in "kube-system" namespace has status "Ready":"False"
	I0214 03:09:45.612901 1319034 pod_ready.go:102] pod "coredns-66bff467f8-zrt2t" in "kube-system" namespace has status "Ready":"False"
	I0214 03:09:47.114708 1319034 pod_ready.go:92] pod "coredns-66bff467f8-zrt2t" in "kube-system" namespace has status "Ready":"True"
	I0214 03:09:47.114734 1319034 pod_ready.go:81] duration metric: took 6.007715802s waiting for pod "coredns-66bff467f8-zrt2t" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.114747 1319034 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.125339 1319034 pod_ready.go:92] pod "etcd-ingress-addon-legacy-642069" in "kube-system" namespace has status "Ready":"True"
	I0214 03:09:47.125410 1319034 pod_ready.go:81] duration metric: took 10.644889ms waiting for pod "etcd-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.125435 1319034 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.130673 1319034 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-642069" in "kube-system" namespace has status "Ready":"True"
	I0214 03:09:47.130699 1319034 pod_ready.go:81] duration metric: took 5.240834ms waiting for pod "kube-apiserver-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.130710 1319034 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.135302 1319034 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-642069" in "kube-system" namespace has status "Ready":"True"
	I0214 03:09:47.135331 1319034 pod_ready.go:81] duration metric: took 4.612387ms waiting for pod "kube-controller-manager-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.135344 1319034 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r4q9h" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.140360 1319034 pod_ready.go:92] pod "kube-proxy-r4q9h" in "kube-system" namespace has status "Ready":"True"
	I0214 03:09:47.140389 1319034 pod_ready.go:81] duration metric: took 5.037442ms waiting for pod "kube-proxy-r4q9h" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.140409 1319034 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.308835 1319034 request.go:629] Waited for 168.317863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-642069
	I0214 03:09:47.508733 1319034 request.go:629] Waited for 197.354701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-642069
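The request.go "Waited for ... due to client-side throttling" messages are client-go's own rate limiter at work: with QPS and Burst left at 0 in the rest.Config dumps above, the client falls back to its defaults (5 requests/s, burst 10), and the burst of readiness GETs briefly exceeds that. The same messages can be surfaced from any kubectl by raising verbosity, e.g.:

	kubectl --context ingress-addon-legacy-642069 get pods -A -v=6 2>&1 | grep -i throttling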
	I0214 03:09:47.511305 1319034 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-642069" in "kube-system" namespace has status "Ready":"True"
	I0214 03:09:47.511330 1319034 pod_ready.go:81] duration metric: took 370.912431ms waiting for pod "kube-scheduler-ingress-addon-legacy-642069" in "kube-system" namespace to be "Ready" ...
	I0214 03:09:47.511343 1319034 pod_ready.go:38] duration metric: took 6.411789321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0214 03:09:47.511384 1319034 api_server.go:52] waiting for apiserver process to appear ...
	I0214 03:09:47.511461 1319034 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:09:47.522620 1319034 api_server.go:72] duration metric: took 6.958077941s to wait for apiserver process to appear ...
	I0214 03:09:47.522646 1319034 api_server.go:88] waiting for apiserver healthz status ...
	I0214 03:09:47.522667 1319034 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0214 03:09:47.531455 1319034 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0214 03:09:47.532530 1319034 api_server.go:141] control plane version: v1.18.20
	I0214 03:09:47.532555 1319034 api_server.go:131] duration metric: took 9.901531ms to wait for apiserver health ...
	I0214 03:09:47.532563 1319034 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 03:09:47.708833 1319034 request.go:629] Waited for 176.199041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0214 03:09:47.714201 1319034 system_pods.go:59] 7 kube-system pods found
	I0214 03:09:47.714304 1319034 system_pods.go:61] "coredns-66bff467f8-zrt2t" [8b331ed7-39b6-41dd-862a-e75f90d7d87f] Running
	I0214 03:09:47.714329 1319034 system_pods.go:61] "etcd-ingress-addon-legacy-642069" [d2392845-ba04-4d23-8752-48b940a02c17] Running
	I0214 03:09:47.714336 1319034 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-642069" [34d29330-3b76-486c-b0c0-344346933bad] Running
	I0214 03:09:47.714346 1319034 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-642069" [7239f6f8-7dc1-488e-b211-be3666906dca] Running
	I0214 03:09:47.714354 1319034 system_pods.go:61] "kube-proxy-r4q9h" [e2dd1824-1cbd-4b61-9592-3d670e0bf391] Running
	I0214 03:09:47.714359 1319034 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-642069" [2c08b077-b6eb-45b3-bb6c-656c0a6e85f5] Running
	I0214 03:09:47.714364 1319034 system_pods.go:61] "storage-provisioner" [a65a5f01-9c6e-43df-ab50-abe4581a79db] Running
	I0214 03:09:47.714374 1319034 system_pods.go:74] duration metric: took 181.804682ms to wait for pod list to return data ...
	I0214 03:09:47.714390 1319034 default_sa.go:34] waiting for default service account to be created ...
	I0214 03:09:47.908822 1319034 request.go:629] Waited for 194.330515ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0214 03:09:47.911121 1319034 default_sa.go:45] found service account: "default"
	I0214 03:09:47.911146 1319034 default_sa.go:55] duration metric: took 196.748801ms for default service account to be created ...
	I0214 03:09:47.911156 1319034 system_pods.go:116] waiting for k8s-apps to be running ...
	I0214 03:09:48.108564 1319034 request.go:629] Waited for 197.342788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0214 03:09:48.114150 1319034 system_pods.go:86] 7 kube-system pods found
	I0214 03:09:48.114184 1319034 system_pods.go:89] "coredns-66bff467f8-zrt2t" [8b331ed7-39b6-41dd-862a-e75f90d7d87f] Running
	I0214 03:09:48.114192 1319034 system_pods.go:89] "etcd-ingress-addon-legacy-642069" [d2392845-ba04-4d23-8752-48b940a02c17] Running
	I0214 03:09:48.114198 1319034 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-642069" [34d29330-3b76-486c-b0c0-344346933bad] Running
	I0214 03:09:48.114206 1319034 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-642069" [7239f6f8-7dc1-488e-b211-be3666906dca] Running
	I0214 03:09:48.114213 1319034 system_pods.go:89] "kube-proxy-r4q9h" [e2dd1824-1cbd-4b61-9592-3d670e0bf391] Running
	I0214 03:09:48.114218 1319034 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-642069" [2c08b077-b6eb-45b3-bb6c-656c0a6e85f5] Running
	I0214 03:09:48.114224 1319034 system_pods.go:89] "storage-provisioner" [a65a5f01-9c6e-43df-ab50-abe4581a79db] Running
	I0214 03:09:48.114233 1319034 system_pods.go:126] duration metric: took 203.071404ms to wait for k8s-apps to be running ...
	I0214 03:09:48.114240 1319034 system_svc.go:44] waiting for kubelet service to be running ....
	I0214 03:09:48.114311 1319034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:09:48.129467 1319034 system_svc.go:56] duration metric: took 15.214963ms WaitForService to wait for kubelet.
	I0214 03:09:48.129493 1319034 kubeadm.go:581] duration metric: took 7.564958494s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0214 03:09:48.129513 1319034 node_conditions.go:102] verifying NodePressure condition ...
	I0214 03:09:48.309050 1319034 request.go:629] Waited for 179.46564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0214 03:09:48.311859 1319034 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 03:09:48.311894 1319034 node_conditions.go:123] node cpu capacity is 2
	I0214 03:09:48.311907 1319034 node_conditions.go:105] duration metric: took 182.387585ms to run NodePressure ...
	I0214 03:09:48.311919 1319034 start.go:228] waiting for startup goroutines ...
	I0214 03:09:48.311926 1319034 start.go:233] waiting for cluster config update ...
	I0214 03:09:48.311936 1319034 start.go:242] writing updated cluster config ...
	I0214 03:09:48.312223 1319034 ssh_runner.go:195] Run: rm -f paused
	I0214 03:09:48.366022 1319034 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0214 03:09:48.368379 1319034 out.go:177] 
	W0214 03:09:48.370291 1319034 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0214 03:09:48.372104 1319034 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0214 03:09:48.373960 1319034 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-642069" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 14 03:09:02 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:02.299778620Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:09:02 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:02.319941451Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:09:02 ingress-addon-legacy-642069 systemd[1]: Started Docker Application Container Engine.
	Feb 14 03:09:02 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:02.321750889Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:09:50 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:50.132887059Z" level=warning msg="reference for unknown type: " digest="sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" remote="docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7"
	Feb 14 03:09:51 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:51.658031671Z" level=info msg="ignoring event" container=cfe6bb23c212101f1780e424b95611595086ceb90b84b7eb760ec8542d2e1d06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:09:51 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:51.702794391Z" level=info msg="ignoring event" container=0c7bad44c6eb5143bf8d40874a37987f5916d7042ba4304a5800ebe32a76528b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:09:52 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:52.200577970Z" level=info msg="ignoring event" container=5c3925449f8833bf36afc550291a2612dd9b140dd62a15a47ede8820e1ed05d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:09:52 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:52.351641876Z" level=info msg="ignoring event" container=0900f0ca1f526ea45121fb877090c0a4788e64095ba7e6e999bf72f05d7bfbe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:09:53 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:53.224688661Z" level=info msg="ignoring event" container=c0d38ff93a6979450882c680f8f8c7e18c69e4c7c879b2c3fd6c06a2a6be5fd2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:09:53 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:09:53.558257204Z" level=warning msg="reference for unknown type: " digest="sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324" remote="registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324"
	Feb 14 03:10:00 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:00.284848723Z" level=warning msg="Published ports are discarded when using host network mode"
	Feb 14 03:10:00 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:00.353285695Z" level=warning msg="Published ports are discarded when using host network mode"
	Feb 14 03:10:00 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:00.520941646Z" level=warning msg="reference for unknown type: " digest="sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" remote="docker.io/cryptexlabs/minikube-ingress-dns@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"
	Feb 14 03:10:06 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:06.938818020Z" level=info msg="ignoring event" container=d02ddc96f8ab0947ad0ea93f60f16f7a3475827bcae227b3cf55731cf4caf3de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:07 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:07.481458901Z" level=info msg="ignoring event" container=fc25298caebafe662b1cc2da1dadebb14ae7c22f699ba5355e8c9b5528f85229 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:22 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:22.580229534Z" level=info msg="ignoring event" container=4482c41cbe6f4ac1ffb5397d0d733b5899bc582908cc387ea78ef35ee876ae26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:24 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:24.826630884Z" level=info msg="ignoring event" container=3f3c3025343c6ab931bb954f51da8b4d89c5a30752f964fff4946c5108419c1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:25 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:25.743326744Z" level=info msg="ignoring event" container=0a33f8f7f65f16a051289c1ea652f45827bc5ebaa9a13e45ec041a1f9747c762 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:38 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:38.465382937Z" level=info msg="ignoring event" container=1c57c07bedbaa91dc0caf80e7a3de566c6d318195e8e72472359b34c6d468915 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:42 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:42.602747082Z" level=info msg="ignoring event" container=93eea55105af9a35aa136eb258f2f4b477534a7122785d2fe4f2c10cec02933b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:51 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:51.320957496Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e5295e328ccf52ac6fb16a2abdc45f68c43383a0197e7542ac0d8a0664055000
	Feb 14 03:10:51 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:51.334496622Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=e5295e328ccf52ac6fb16a2abdc45f68c43383a0197e7542ac0d8a0664055000
	Feb 14 03:10:51 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:51.405786835Z" level=info msg="ignoring event" container=e5295e328ccf52ac6fb16a2abdc45f68c43383a0197e7542ac0d8a0664055000 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 14 03:10:51 ingress-addon-legacy-642069 dockerd[1296]: time="2024-02-14T03:10:51.462013374Z" level=info msg="ignoring event" container=5afc8526fd15571bc7e7d67f42f3bbb705e388d5c8e8250ed54ecb63fd7db938 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
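
	The two "failed to exit within 2s of signal 15" lines show dockerd escalating a stop: the ingress controller did not exit within the 2-second grace period after SIGTERM, so it was SIGKILLed. The behavior is roughly what a manual stop with a short timeout produces (container ID taken from the log; a sketch, not a command the harness ran):

	docker stop -t 2 e5295e328ccf5   # SIGTERM, wait up to 2s, then SIGKILL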
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	93eea55105af9       dd1b12fcb6097                                                                                                      14 seconds ago       Exited              hello-world-app           2                   f02ecbe2b1b26       hello-world-app-5f5d8b66bb-w97ll
	ca6aa9be26ab6       nginx@sha256:f2802c2a9d09c7aa3ace27445dfc5656ff24355da28e7b958074a0111e3fc076                                      42 seconds ago       Running             nginx                     0                   d624d4e7d7bed       nginx
	e5295e328ccf5       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   59 seconds ago       Exited              controller                0                   5afc8526fd155       ingress-nginx-controller-7fcf777cb7-l2rkw
	0900f0ca1f526       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   c0d38ff93a697       ingress-nginx-admission-patch-dld74
	0c7bad44c6eb5       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   5c3925449f883       ingress-nginx-admission-create-6fqkp
	da30e6a335c0b       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   4482e3180eeb4       storage-provisioner
	e1fad8568cde5       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   cc22ad0dd954d       coredns-66bff467f8-zrt2t
	548bcdc299f8e       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   5081c25beb3e0       kube-proxy-r4q9h
	05be0f576908c       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   9a75834c330d9       etcd-ingress-addon-legacy-642069
	ded2650aacb90       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   32adf45a596d1       kube-scheduler-ingress-addon-legacy-642069
	a689db9495ffb       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   9e5c031214948       kube-controller-manager-ingress-addon-legacy-642069
	32f8140360306       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   c60ca70b6e986       kube-apiserver-ingress-addon-legacy-642069
	
	
	==> coredns [e1fad8568cde] <==
	[INFO] 172.17.0.1:48996 - 15369 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038013s
	[INFO] 172.17.0.1:63625 - 21081 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072367s
	[INFO] 172.17.0.1:48996 - 23851 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026781s
	[INFO] 172.17.0.1:63625 - 64746 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006756s
	[INFO] 172.17.0.1:48996 - 45930 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070341s
	[INFO] 172.17.0.1:30009 - 45320 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081262s
	[INFO] 172.17.0.1:63625 - 1487 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.007336191s
	[INFO] 172.17.0.1:48996 - 10165 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.009524893s
	[INFO] 172.17.0.1:6712 - 6000 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00011756s
	[INFO] 172.17.0.1:6712 - 49531 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000043954s
	[INFO] 172.17.0.1:30009 - 38847 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025895s
	[INFO] 172.17.0.1:6712 - 35429 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036996s
	[INFO] 172.17.0.1:6712 - 55341 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000259735s
	[INFO] 172.17.0.1:6712 - 64234 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000206182s
	[INFO] 172.17.0.1:48996 - 60764 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002184552s
	[INFO] 172.17.0.1:30009 - 17545 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001281937s
	[INFO] 172.17.0.1:63625 - 32753 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002894548s
	[INFO] 172.17.0.1:48996 - 15463 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000094956s
	[INFO] 172.17.0.1:63625 - 4050 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000044003s
	[INFO] 172.17.0.1:6712 - 10450 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038998s
	[INFO] 172.17.0.1:6712 - 64319 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004637182s
	[INFO] 172.17.0.1:30009 - 45565 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.004904753s
	[INFO] 172.17.0.1:6712 - 56710 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001987929s
	[INFO] 172.17.0.1:30009 - 46637 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036315s
	[INFO] 172.17.0.1:6712 - 63801 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074386s
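
	The NXDOMAIN-then-NOERROR pattern above is ordinary ndots search-path expansion, not a lookup failure: with the cluster default of ndots:5, "hello-world-app.default.svc.cluster.local" is first tried with each search suffix (svc.cluster.local, cluster.local, us-east-2.compute.internal) before the absolute name resolves. This can be confirmed from any running pod; a sketch against this profile (the nginx pod is used because it is Running; the nameserver IP shown is the conventional default and varies by cluster):

	kubectl --context ingress-addon-legacy-642069 exec nginx -- cat /etc/resolv.conf
	# expected shape:
	#   nameserver 10.96.0.10
	#   search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	#   options ndots:5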
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-642069
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-642069
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d
	                    minikube.k8s.io/name=ingress-addon-legacy-642069
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T03_09_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 03:09:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-642069
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:10:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:10:28 +0000   Wed, 14 Feb 2024 03:09:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:10:28 +0000   Wed, 14 Feb 2024 03:09:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:10:28 +0000   Wed, 14 Feb 2024 03:09:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 14 Feb 2024 03:10:28 +0000   Wed, 14 Feb 2024 03:09:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-642069
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 18624d901c994287af726b7608561706
	  System UUID:                39a6be45-5a2e-4ee2-a3c7-5894f7806ea9
	  Boot ID:                    0ec78279-ad11-40d5-8717-d4c1429371b1
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-w97ll                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 coredns-66bff467f8-zrt2t                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-ingress-addon-legacy-642069                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-apiserver-ingress-addon-legacy-642069             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-642069    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-r4q9h                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-ingress-addon-legacy-642069             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (0%)   170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 104s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x4 over 104s)  kubelet     Node ingress-addon-legacy-642069 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x4 over 104s)  kubelet     Node ingress-addon-legacy-642069 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x3 over 104s)  kubelet     Node ingress-addon-legacy-642069 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s                  kubelet     Node ingress-addon-legacy-642069 status is now: NodeHasSufficientMemory
	  Normal  Starting                 89s                  kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    89s                  kubelet     Node ingress-addon-legacy-642069 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s                  kubelet     Node ingress-addon-legacy-642069 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             89s                  kubelet     Node ingress-addon-legacy-642069 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                79s                  kubelet     Node ingress-addon-legacy-642069 status is now: NodeReady
	  Normal  Starting                 76s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001118] FS-Cache: O-key=[8] 'f5623b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000951] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=000000005d7fa387
	[  +0.001080] FS-Cache: N-key=[8] 'f5623b0000000000'
	[  +0.002745] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000986] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=0000000005c14f3f
	[  +0.001144] FS-Cache: O-key=[8] 'f5623b0000000000'
	[  +0.000723] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=00000000f768614b
	[  +0.001085] FS-Cache: N-key=[8] 'f5623b0000000000'
	[  +2.337756] FS-Cache: Duplicate cookie detected
	[  +0.000800] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001050] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=000000001ceffc46
	[  +0.001184] FS-Cache: O-key=[8] 'f4623b0000000000'
	[  +0.000778] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001065] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=000000005bc430d5
	[  +0.001202] FS-Cache: N-key=[8] 'f4623b0000000000'
	[  +0.373448] FS-Cache: Duplicate cookie detected
	[  +0.000816] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001030] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=000000004c5a19ad
	[  +0.001115] FS-Cache: O-key=[8] 'fa623b0000000000'
	[  +0.000765] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=00000000e1123212
	[  +0.001047] FS-Cache: N-key=[8] 'fa623b0000000000'
	
	
	==> etcd [05be0f576908] <==
	raft2024/02/14 03:09:16 INFO: aec36adc501070cc became follower at term 0
	raft2024/02/14 03:09:16 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2024/02/14 03:09:16 INFO: aec36adc501070cc became follower at term 1
	raft2024/02/14 03:09:16 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-14 03:09:16.551713 W | auth: simple token is not cryptographically signed
	2024-02-14 03:09:16.705645 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-02-14 03:09:17.062297 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/02/14 03:09:17 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2024-02-14 03:09:17.062894 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2024-02-14 03:09:17.064754 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-02-14 03:09:17.065043 I | embed: listening for peers on 192.168.49.2:2380
	2024-02-14 03:09:17.065306 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/02/14 03:09:17 INFO: aec36adc501070cc is starting a new election at term 1
	raft2024/02/14 03:09:17 INFO: aec36adc501070cc became candidate at term 2
	raft2024/02/14 03:09:17 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2024/02/14 03:09:17 INFO: aec36adc501070cc became leader at term 2
	raft2024/02/14 03:09:17 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2024-02-14 03:09:17.783223 I | etcdserver: setting up the initial cluster version to 3.4
	2024-02-14 03:09:17.783883 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-02-14 03:09:17.783962 I | etcdserver/api: enabled capabilities for version 3.4
	2024-02-14 03:09:17.783997 I | etcdserver: published {Name:ingress-addon-legacy-642069 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2024-02-14 03:09:17.784030 I | embed: ready to serve client requests
	2024-02-14 03:09:17.784240 I | embed: ready to serve client requests
	2024-02-14 03:09:17.785701 I | embed: serving client requests on 127.0.0.1:2379
	2024-02-14 03:09:17.786048 I | embed: serving client requests on 192.168.49.2:2379
	
	
	==> kernel <==
	 03:10:57 up  5:53,  0 users,  load average: 1.46, 2.08, 2.20
	Linux ingress-addon-legacy-642069 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [32f814036030] <==
	I0214 03:09:21.769719       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0214 03:09:21.816496       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0214 03:09:21.941607       1 cache.go:39] Caches are synced for autoregister controller
	I0214 03:09:21.950415       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0214 03:09:21.956427       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0214 03:09:21.956575       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 03:09:21.957503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 03:09:22.740586       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0214 03:09:22.740618       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0214 03:09:22.754721       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0214 03:09:22.760733       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0214 03:09:22.760937       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0214 03:09:23.210196       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 03:09:23.247457       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0214 03:09:23.390180       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0214 03:09:23.391319       1 controller.go:609] quota admission added evaluator for: endpoints
	I0214 03:09:23.396139       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 03:09:23.679978       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 03:09:24.165649       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0214 03:09:24.834781       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0214 03:09:24.937302       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0214 03:09:39.702250       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0214 03:09:39.733339       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0214 03:09:49.216863       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0214 03:10:12.397786       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [a689db9495ff] <==
	I0214 03:09:39.857984       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0214 03:09:39.878459       1 shared_informer.go:230] Caches are synced for attach detach 
	I0214 03:09:39.927880       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I0214 03:09:40.117657       1 shared_informer.go:230] Caches are synced for job 
	I0214 03:09:40.137797       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0214 03:09:40.137838       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0214 03:09:40.147976       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d72ac841-63e0-4342-a0d4-e9a9beeb6167", APIVersion:"apps/v1", ResourceVersion:"362", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0214 03:09:40.158381       1 shared_informer.go:230] Caches are synced for resource quota 
	I0214 03:09:40.215237       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"1a2ed813-c063-44c9-bab4-2c505aac3055", APIVersion:"apps/v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-swmfv
	I0214 03:09:40.220182       1 shared_informer.go:230] Caches are synced for resource quota 
	I0214 03:09:40.228589       1 shared_informer.go:230] Caches are synced for taint 
	I0214 03:09:40.228736       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0214 03:09:40.228827       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-642069. Assuming now as a timestamp.
	I0214 03:09:40.228873       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0214 03:09:40.229191       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-642069", UID:"a2b1d523-5b36-4301-ba9b-9b0ffa62c495", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-642069 event: Registered Node ingress-addon-legacy-642069 in Controller
	I0214 03:09:40.229318       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0214 03:09:40.229320       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0214 03:09:49.194391       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"390749dc-03b0-4790-b9fd-b1d6bdbe8c3f", APIVersion:"apps/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0214 03:09:49.226683       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"0ed66bea-178a-4856-b2cd-0ba1178c79d0", APIVersion:"apps/v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-l2rkw
	I0214 03:09:49.293981       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5da0dbee-a6a4-4024-8685-890a3d50e2ac", APIVersion:"batch/v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-6fqkp
	I0214 03:09:49.330900       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6ae56106-df5e-4973-8f9c-c9b266dc05c3", APIVersion:"batch/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-dld74
	I0214 03:09:52.165987       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5da0dbee-a6a4-4024-8685-890a3d50e2ac", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0214 03:09:53.189071       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6ae56106-df5e-4973-8f9c-c9b266dc05c3", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0214 03:10:22.147984       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"952ef6df-e704-4ee7-9dd2-807f0b3e0a77", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0214 03:10:22.152638       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"35e83eb1-e0a9-4a74-8761-8b577d94d67c", APIVersion:"apps/v1", ResourceVersion:"591", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-w97ll
	
	
	==> kube-proxy [548bcdc299f8] <==
	W0214 03:09:41.023280       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0214 03:09:41.038886       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0214 03:09:41.040214       1 server_others.go:186] Using iptables Proxier.
	I0214 03:09:41.045552       1 server.go:583] Version: v1.18.20
	I0214 03:09:41.050353       1 config.go:315] Starting service config controller
	I0214 03:09:41.050547       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0214 03:09:41.050789       1 config.go:133] Starting endpoints config controller
	I0214 03:09:41.050869       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0214 03:09:41.150936       1 shared_informer.go:230] Caches are synced for service config 
	I0214 03:09:41.151077       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	
	==> kube-scheduler [ded2650aacb9] <==
	W0214 03:09:21.888991       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0214 03:09:21.924249       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0214 03:09:21.924600       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0214 03:09:21.933332       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	E0214 03:09:21.953042       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0214 03:09:21.953177       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0214 03:09:21.953342       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 03:09:21.953503       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 03:09:21.953675       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 03:09:21.953834       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 03:09:21.954001       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 03:09:21.954161       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 03:09:21.954313       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0214 03:09:21.954378       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0214 03:09:21.954411       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0214 03:09:21.955350       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 03:09:21.955574       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 03:09:21.955922       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0214 03:09:21.956186       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 03:09:22.873282       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 03:09:22.926782       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 03:09:22.990054       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0214 03:09:23.019685       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0214 03:09:26.154707       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0214 03:09:39.803445       1 factory.go:503] pod: kube-system/coredns-66bff467f8-swmfv is already present in the active queue
	
	
	==> kubelet <==
	Feb 14 03:10:27 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:27.660830    2844 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0a33f8f7f65f16a051289c1ea652f45827bc5ebaa9a13e45ec041a1f9747c762
	Feb 14 03:10:27 ingress-addon-legacy-642069 kubelet[2844]: E0214 03:10:27.661937    2844 pod_workers.go:191] Error syncing pod b6c07708-4559-4056-8c84-491fba0ec85b ("hello-world-app-5f5d8b66bb-w97ll_default(b6c07708-4559-4056-8c84-491fba0ec85b)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-w97ll_default(b6c07708-4559-4056-8c84-491fba0ec85b)"
	Feb 14 03:10:37 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:37.935377    2844 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-hsr66" (UniqueName: "kubernetes.io/secret/52308190-b09d-47e3-85ad-169982bb2fb7-minikube-ingress-dns-token-hsr66") pod "52308190-b09d-47e3-85ad-169982bb2fb7" (UID: "52308190-b09d-47e3-85ad-169982bb2fb7")
	Feb 14 03:10:37 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:37.940887    2844 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52308190-b09d-47e3-85ad-169982bb2fb7-minikube-ingress-dns-token-hsr66" (OuterVolumeSpecName: "minikube-ingress-dns-token-hsr66") pod "52308190-b09d-47e3-85ad-169982bb2fb7" (UID: "52308190-b09d-47e3-85ad-169982bb2fb7"). InnerVolumeSpecName "minikube-ingress-dns-token-hsr66". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 03:10:38 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:38.035867    2844 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-hsr66" (UniqueName: "kubernetes.io/secret/52308190-b09d-47e3-85ad-169982bb2fb7-minikube-ingress-dns-token-hsr66") on node "ingress-addon-legacy-642069" DevicePath ""
	Feb 14 03:10:38 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:38.752541    2844 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 4482c41cbe6f4ac1ffb5397d0d733b5899bc582908cc387ea78ef35ee876ae26
	Feb 14 03:10:42 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:42.431199    2844 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0a33f8f7f65f16a051289c1ea652f45827bc5ebaa9a13e45ec041a1f9747c762
	Feb 14 03:10:42 ingress-addon-legacy-642069 kubelet[2844]: W0214 03:10:42.629410    2844 container.go:412] Failed to create summary reader for "/kubepods/besteffort/podb6c07708-4559-4056-8c84-491fba0ec85b/93eea55105af9a35aa136eb258f2f4b477534a7122785d2fe4f2c10cec02933b": none of the resources are being tracked.
	Feb 14 03:10:42 ingress-addon-legacy-642069 kubelet[2844]: W0214 03:10:42.784464    2844 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-w97ll through plugin: invalid network status for
	Feb 14 03:10:42 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:42.789717    2844 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 0a33f8f7f65f16a051289c1ea652f45827bc5ebaa9a13e45ec041a1f9747c762
	Feb 14 03:10:42 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:42.790055    2844 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 93eea55105af9a35aa136eb258f2f4b477534a7122785d2fe4f2c10cec02933b
	Feb 14 03:10:42 ingress-addon-legacy-642069 kubelet[2844]: E0214 03:10:42.790339    2844 pod_workers.go:191] Error syncing pod b6c07708-4559-4056-8c84-491fba0ec85b ("hello-world-app-5f5d8b66bb-w97ll_default(b6c07708-4559-4056-8c84-491fba0ec85b)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-w97ll_default(b6c07708-4559-4056-8c84-491fba0ec85b)"
	Feb 14 03:10:43 ingress-addon-legacy-642069 kubelet[2844]: W0214 03:10:43.798250    2844 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-w97ll through plugin: invalid network status for
	Feb 14 03:10:49 ingress-addon-legacy-642069 kubelet[2844]: E0214 03:10:49.299076    2844 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l2rkw.17b39c09b9ecc2f6", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l2rkw", UID:"5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e", APIVersion:"v1", ResourceVersion:"460", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-642069"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b29ae515fc8f6, ext:84519993674, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b29ae515fc8f6, ext:84519993674, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l2rkw.17b39c09b9ecc2f6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 14 03:10:49 ingress-addon-legacy-642069 kubelet[2844]: E0214 03:10:49.310630    2844 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-l2rkw.17b39c09b9ecc2f6", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-l2rkw", UID:"5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e", APIVersion:"v1", ResourceVersion:"460", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-642069"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16b29ae515fc8f6, ext:84519993674, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16b29ae51f25b3e, ext:84529599378, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-l2rkw.17b39c09b9ecc2f6" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Feb 14 03:10:51 ingress-addon-legacy-642069 kubelet[2844]: W0214 03:10:51.859729    2844 pod_container_deletor.go:77] Container "5afc8526fd15571bc7e7d67f42f3bbb705e388d5c8e8250ed54ecb63fd7db938" not found in pod's containers
	Feb 14 03:10:53 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:53.479390    2844 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-v7s2j" (UniqueName: "kubernetes.io/secret/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e-ingress-nginx-token-v7s2j") pod "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e" (UID: "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e")
	Feb 14 03:10:53 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:53.481637    2844 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e-webhook-cert") pod "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e" (UID: "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e")
	Feb 14 03:10:53 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:53.495937    2844 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e-ingress-nginx-token-v7s2j" (OuterVolumeSpecName: "ingress-nginx-token-v7s2j") pod "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e" (UID: "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e"). InnerVolumeSpecName "ingress-nginx-token-v7s2j". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 03:10:53 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:53.497554    2844 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e" (UID: "5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Feb 14 03:10:53 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:53.582233    2844 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e-webhook-cert") on node "ingress-addon-legacy-642069" DevicePath ""
	Feb 14 03:10:53 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:53.582283    2844 reconciler.go:319] Volume detached for volume "ingress-nginx-token-v7s2j" (UniqueName: "kubernetes.io/secret/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e-ingress-nginx-token-v7s2j") on node "ingress-addon-legacy-642069" DevicePath ""
	Feb 14 03:10:54 ingress-addon-legacy-642069 kubelet[2844]: W0214 03:10:54.444568    2844 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/5b35d8c1-5680-4cf3-9ff8-1d78e5b68e4e/volumes" does not exist
	Feb 14 03:10:55 ingress-addon-legacy-642069 kubelet[2844]: I0214 03:10:55.431030    2844 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 93eea55105af9a35aa136eb258f2f4b477534a7122785d2fe4f2c10cec02933b
	Feb 14 03:10:55 ingress-addon-legacy-642069 kubelet[2844]: E0214 03:10:55.431318    2844 pod_workers.go:191] Error syncing pod b6c07708-4559-4056-8c84-491fba0ec85b ("hello-world-app-5f5d8b66bb-w97ll_default(b6c07708-4559-4056-8c84-491fba0ec85b)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-w97ll_default(b6c07708-4559-4056-8c84-491fba0ec85b)"
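
	The back-off progression in these events (10s, then 20s) is the kubelet's standard CrashLoopBackOff doubling for hello-world-app. When triaging locally, the logs of the previous crashed attempt are usually the fastest signal; a sketch using the pod name from the events above:

	kubectl --context ingress-addon-legacy-642069 logs hello-world-app-5f5d8b66bb-w97ll --previous
	kubectl --context ingress-addon-legacy-642069 describe pod hello-world-app-5f5d8b66bb-w97ll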
	
	
	==> storage-provisioner [da30e6a335c0] <==
	I0214 03:09:43.164308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0214 03:09:43.178547       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0214 03:09:43.178812       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0214 03:09:43.187875       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0214 03:09:43.188257       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"183c850d-887c-4ddd-a086-a200c17eab35", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-642069_75e91d33-a6a2-4bda-8e47-92448ac256f9 became leader
	I0214 03:09:43.188286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-642069_75e91d33-a6a2-4bda-8e47-92448ac256f9!
	I0214 03:09:43.289352       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-642069_75e91d33-a6a2-4bda-8e47-92448ac256f9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-642069 -n ingress-addon-legacy-642069
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-642069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (57.85s)
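For context on the healthy components in the dump above: the storage-provisioner log shows client-go's standard leader-election handshake (acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, start the controller). Below is a minimal sketch of that handshake, assuming an in-cluster config and a Lease-based lock; the legacy provisioner above locks an Endpoints object, and this is not its actual code:

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes running inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock name/namespace as the log above, but Lease-based rather
	// than the Endpoints object the legacy provisioner uses.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// Corresponds to "successfully acquired lease ... now starting service!"
			OnStartedLeading: func(ctx context.Context) { /* start the provisioner loop */ },
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}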

TestScheduledStopUnix (34.58s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-349749 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-349749 --memory=2048 --driver=docker  --container-runtime=docker: (30.109879297s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-349749 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-349749 -n scheduled-stop-349749
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-349749 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 1418920 running but should have been killed on reschedule of stop
panic.go:523: *** TestScheduledStopUnix FAILED at 2024-02-14 03:27:27.015226636 +0000 UTC m=+1794.832167245
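For context: the assertion at scheduled_stop_test.go:98 is that issuing a new "--schedule" stop must first kill the daemon spawned by the previous one, and here pid 1418920 from the earlier schedule was still alive after the reschedule. A minimal sketch of the expected replace-on-reschedule semantics, using a hypothetical PID file rather than minikube's actual bookkeeping:

package main

import (
	"os"
	"path/filepath"
	"strconv"
	"strings"
	"syscall"
	"time"
)

// Hypothetical location; minikube's real bookkeeping lives elsewhere.
var pidFile = filepath.Join(os.TempDir(), "scheduled-stop.pid")

// killPrevious terminates the daemon recorded by an earlier --schedule run.
// Skipping this step is exactly the failure above: the old process
// survives the reschedule.
func killPrevious() {
	data, err := os.ReadFile(pidFile)
	if err != nil {
		return // nothing scheduled yet
	}
	if pid, err := strconv.Atoi(strings.TrimSpace(string(data))); err == nil {
		_ = syscall.Kill(pid, syscall.SIGKILL) // best-effort
	}
	_ = os.Remove(pidFile)
}

// schedule replaces any pending stop with a fresh timer.
func schedule(d time.Duration, stop func()) {
	killPrevious()
	_ = os.WriteFile(pidFile, []byte(strconv.Itoa(os.Getpid())), 0o644)
	time.AfterFunc(d, stop)
}

func main() {
	schedule(15*time.Second, func() { os.Exit(0) })
	select {} // keep the process alive while the timer runs
}

Each call to schedule first reaps whatever the previous call left behind, so at most one stop timer is ever pending; the failure above is what it looks like when that reaping step is skipped or races.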
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect scheduled-stop-349749
helpers_test.go:235: (dbg) docker inspect scheduled-stop-349749:

-- stdout --
	[
	    {
	        "Id": "7126c0545c1cb5a6c1313621b809a9eadb21c20f86cbfd6ce35757f3a1f1d918",
	        "Created": "2024-02-14T03:27:01.448887547Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1416158,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-02-14T03:27:01.732760028Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20e2d9b56eb2e595fd2b9c5719a0e58f3d7f8c692190d8fde2558cb6a9714f01",
	        "ResolvConfPath": "/var/lib/docker/containers/7126c0545c1cb5a6c1313621b809a9eadb21c20f86cbfd6ce35757f3a1f1d918/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7126c0545c1cb5a6c1313621b809a9eadb21c20f86cbfd6ce35757f3a1f1d918/hostname",
	        "HostsPath": "/var/lib/docker/containers/7126c0545c1cb5a6c1313621b809a9eadb21c20f86cbfd6ce35757f3a1f1d918/hosts",
	        "LogPath": "/var/lib/docker/containers/7126c0545c1cb5a6c1313621b809a9eadb21c20f86cbfd6ce35757f3a1f1d918/7126c0545c1cb5a6c1313621b809a9eadb21c20f86cbfd6ce35757f3a1f1d918-json.log",
	        "Name": "/scheduled-stop-349749",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-349749:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-349749",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5ea4134eb35ae739bdab687b669e954b349db8dfbdca448d3505dac3e141d9ca-init/diff:/var/lib/docker/overlay2/5910aa9960042d82258ed2c744f886c75b60e8845789b5b8e9c74bac81b955ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5ea4134eb35ae739bdab687b669e954b349db8dfbdca448d3505dac3e141d9ca/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5ea4134eb35ae739bdab687b669e954b349db8dfbdca448d3505dac3e141d9ca/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5ea4134eb35ae739bdab687b669e954b349db8dfbdca448d3505dac3e141d9ca/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-349749",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-349749/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-349749",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-349749",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-349749",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6264d51fd422d42c1c75949947aa8fe69595ff91d9bb2b832ac8bd6fbe01aca0",
	            "SandboxKey": "/var/run/docker/netns/6264d51fd422",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34194"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34192"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34191"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-349749": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7126c0545c1c",
	                        "scheduled-stop-349749"
	                    ],
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "fa1fa9b4ad47e0fb7cfe73b65b64fa2d4a6d039e56f1f9235e203af63964570c",
	                    "EndpointID": "bbc4b38d7fd0da62f6676fcbc5a767492121b81f54d8c6590567e4db2ae846b5",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "scheduled-stop-349749",
	                        "7126c0545c1c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
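For context: the status probes in this post-mortem are Go-template queries against docker inspect (the Last Start log below shows the same pattern via cli_runner, e.g. --format={{.State.Status}}). A self-contained sketch of that query style; containerState is a made-up helper, not the harness's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState runs `docker container inspect` with a Go template,
// the same query style as the cli_runner lines elsewhere in this report.
func containerState(name, tmpl string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", tmpl, name).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerState("scheduled-stop-349749", "{{.State.Status}}")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(status) // "running", per the State block above
}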
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-349749 -n scheduled-stop-349749
helpers_test.go:244: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-349749 logs -n 25
helpers_test.go:252: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p multinode-186271            | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:19 UTC | 14 Feb 24 03:20 UTC |
	| start   | -p multinode-186271            | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:20 UTC | 14 Feb 24 03:22 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	| node    | list -p multinode-186271       | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:22 UTC |                     |
	| node    | multinode-186271 node delete   | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:22 UTC | 14 Feb 24 03:22 UTC |
	|         | m03                            |                       |         |         |                     |                     |
	| stop    | multinode-186271 stop          | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:22 UTC | 14 Feb 24 03:22 UTC |
	| start   | -p multinode-186271            | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:22 UTC | 14 Feb 24 03:23 UTC |
	|         | --wait=true -v=8               |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| node    | list -p multinode-186271       | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:23 UTC |                     |
	| start   | -p multinode-186271-m02        | multinode-186271-m02  | jenkins | v1.32.0 | 14 Feb 24 03:23 UTC |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| start   | -p multinode-186271-m03        | multinode-186271-m03  | jenkins | v1.32.0 | 14 Feb 24 03:23 UTC | 14 Feb 24 03:24 UTC |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| node    | add -p multinode-186271        | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:24 UTC |                     |
	| delete  | -p multinode-186271-m03        | multinode-186271-m03  | jenkins | v1.32.0 | 14 Feb 24 03:24 UTC | 14 Feb 24 03:24 UTC |
	| delete  | -p multinode-186271            | multinode-186271      | jenkins | v1.32.0 | 14 Feb 24 03:24 UTC | 14 Feb 24 03:24 UTC |
	| start   | -p test-preload-816804         | test-preload-816804   | jenkins | v1.32.0 | 14 Feb 24 03:24 UTC | 14 Feb 24 03:25 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr              |                       |         |         |                     |                     |
	|         | --wait=true --preload=false    |                       |         |         |                     |                     |
	|         | --driver=docker                |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                       |         |         |                     |                     |
	| image   | test-preload-816804 image pull | test-preload-816804   | jenkins | v1.32.0 | 14 Feb 24 03:25 UTC | 14 Feb 24 03:25 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                       |         |         |                     |                     |
	| stop    | -p test-preload-816804         | test-preload-816804   | jenkins | v1.32.0 | 14 Feb 24 03:25 UTC | 14 Feb 24 03:25 UTC |
	| start   | -p test-preload-816804         | test-preload-816804   | jenkins | v1.32.0 | 14 Feb 24 03:25 UTC | 14 Feb 24 03:26 UTC |
	|         | --memory=2200                  |                       |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                       |         |         |                     |                     |
	|         | --wait=true --driver=docker    |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| image   | test-preload-816804 image list | test-preload-816804   | jenkins | v1.32.0 | 14 Feb 24 03:26 UTC | 14 Feb 24 03:26 UTC |
	| delete  | -p test-preload-816804         | test-preload-816804   | jenkins | v1.32.0 | 14 Feb 24 03:26 UTC | 14 Feb 24 03:26 UTC |
	| start   | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:26 UTC | 14 Feb 24 03:27 UTC |
	|         | --memory=2048 --driver=docker  |                       |         |         |                     |                     |
	|         | --container-runtime=docker     |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:27 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:27 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:27 UTC |                     |
	|         | --schedule 5m                  |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:27 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:27 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	| stop    | -p scheduled-stop-349749       | scheduled-stop-349749 | jenkins | v1.32.0 | 14 Feb 24 03:27 UTC |                     |
	|         | --schedule 15s                 |                       |         |         |                     |                     |
	|---------|--------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 03:26:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 03:26:56.428885 1415694 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:26:56.429000 1415694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:26:56.429004 1415694 out.go:304] Setting ErrFile to fd 2...
	I0214 03:26:56.429008 1415694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:26:56.429258 1415694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 03:26:56.429670 1415694 out.go:298] Setting JSON to false
	I0214 03:26:56.430588 1415694 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":22162,"bootTime":1707859055,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 03:26:56.430647 1415694 start.go:138] virtualization:  
	I0214 03:26:56.433215 1415694 out.go:177] * [scheduled-stop-349749] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:26:56.435926 1415694 out.go:177]   - MINIKUBE_LOCATION=18165
	I0214 03:26:56.437480 1415694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:26:56.436045 1415694 notify.go:220] Checking for updates...
	I0214 03:26:56.441676 1415694 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 03:26:56.443293 1415694 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 03:26:56.445064 1415694 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:26:56.446756 1415694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:26:56.448464 1415694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:26:56.469394 1415694 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:26:56.469493 1415694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:26:56.538303 1415694 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-14 03:26:56.529053414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:26:56.538391 1415694 docker.go:295] overlay module found
	I0214 03:26:56.540170 1415694 out.go:177] * Using the docker driver based on user configuration
	I0214 03:26:56.541854 1415694 start.go:298] selected driver: docker
	I0214 03:26:56.541862 1415694 start.go:902] validating driver "docker" against <nil>
	I0214 03:26:56.541873 1415694 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:26:56.542538 1415694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:26:56.600146 1415694 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:46 SystemTime:2024-02-14 03:26:56.591498085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:26:56.600291 1415694 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 03:26:56.600498 1415694 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 03:26:56.602462 1415694 out.go:177] * Using Docker driver with root privileges
	I0214 03:26:56.604233 1415694 cni.go:84] Creating CNI manager for ""
	I0214 03:26:56.604250 1415694 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 03:26:56.604261 1415694 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 03:26:56.604270 1415694 start_flags.go:321] config:
	{Name:scheduled-stop-349749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-349749 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:26:56.606295 1415694 out.go:177] * Starting control plane node scheduled-stop-349749 in cluster scheduled-stop-349749
	I0214 03:26:56.608160 1415694 cache.go:121] Beginning downloading kic base image for docker with docker
	I0214 03:26:56.609798 1415694 out.go:177] * Pulling base image v0.0.42-1704759386-17866 ...
	I0214 03:26:56.611559 1415694 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 03:26:56.611608 1415694 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0214 03:26:56.611613 1415694 cache.go:56] Caching tarball of preloaded images
	I0214 03:26:56.611640 1415694 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 03:26:56.611734 1415694 preload.go:174] Found /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0214 03:26:56.611745 1415694 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0214 03:26:56.612083 1415694 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/config.json ...
	I0214 03:26:56.612102 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/config.json: {Name:mk90f24064007c46a0f5833b72b845f1d47ea064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:26:56.626937 1415694 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon, skipping pull
	I0214 03:26:56.626951 1415694 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in daemon, skipping load
	I0214 03:26:56.626970 1415694 cache.go:194] Successfully downloaded all kic artifacts
	I0214 03:26:56.627006 1415694 start.go:365] acquiring machines lock for scheduled-stop-349749: {Name:mkaf12a58f5a0fd3e0971b8166a965a12164e4e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0214 03:26:56.627126 1415694 start.go:369] acquired machines lock for "scheduled-stop-349749" in 104.047µs
	I0214 03:26:56.627150 1415694 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-349749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-349749 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0214 03:26:56.627225 1415694 start.go:125] createHost starting for "" (driver="docker")
	I0214 03:26:56.630713 1415694 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0214 03:26:56.630982 1415694 start.go:159] libmachine.API.Create for "scheduled-stop-349749" (driver="docker")
	I0214 03:26:56.631015 1415694 client.go:168] LocalClient.Create starting
	I0214 03:26:56.631094 1415694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem
	I0214 03:26:56.631126 1415694 main.go:141] libmachine: Decoding PEM data...
	I0214 03:26:56.631139 1415694 main.go:141] libmachine: Parsing certificate...
	I0214 03:26:56.631203 1415694 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem
	I0214 03:26:56.631219 1415694 main.go:141] libmachine: Decoding PEM data...
	I0214 03:26:56.631228 1415694 main.go:141] libmachine: Parsing certificate...
	I0214 03:26:56.631603 1415694 cli_runner.go:164] Run: docker network inspect scheduled-stop-349749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0214 03:26:56.645920 1415694 cli_runner.go:211] docker network inspect scheduled-stop-349749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0214 03:26:56.646008 1415694 network_create.go:281] running [docker network inspect scheduled-stop-349749] to gather additional debugging logs...
	I0214 03:26:56.646023 1415694 cli_runner.go:164] Run: docker network inspect scheduled-stop-349749
	W0214 03:26:56.660393 1415694 cli_runner.go:211] docker network inspect scheduled-stop-349749 returned with exit code 1
	I0214 03:26:56.660414 1415694 network_create.go:284] error running [docker network inspect scheduled-stop-349749]: docker network inspect scheduled-stop-349749: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-349749 not found
	I0214 03:26:56.660426 1415694 network_create.go:286] output of [docker network inspect scheduled-stop-349749]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-349749 not found
	
	** /stderr **
	I0214 03:26:56.660537 1415694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:26:56.675069 1415694 network.go:212] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-123c7c386240 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:59:c3:53:2a} reservation:<nil>}
	I0214 03:26:56.675267 1415694 network.go:212] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-82cf6f25c370 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:d1:9a:e4:a3} reservation:<nil>}
	I0214 03:26:56.675607 1415694 network.go:207] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002554f00}
	I0214 03:26:56.675622 1415694 network_create.go:124] attempt to create docker network scheduled-stop-349749 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0214 03:26:56.675724 1415694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-349749 scheduled-stop-349749
	I0214 03:26:56.738638 1415694 network_create.go:108] docker network scheduled-stop-349749 192.168.67.0/24 created
	I0214 03:26:56.738668 1415694 kic.go:121] calculated static IP "192.168.67.2" for the "scheduled-stop-349749" container
	I0214 03:26:56.738744 1415694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0214 03:26:56.752590 1415694 cli_runner.go:164] Run: docker volume create scheduled-stop-349749 --label name.minikube.sigs.k8s.io=scheduled-stop-349749 --label created_by.minikube.sigs.k8s.io=true
	I0214 03:26:56.768272 1415694 oci.go:103] Successfully created a docker volume scheduled-stop-349749
	I0214 03:26:56.768350 1415694 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-349749-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-349749 --entrypoint /usr/bin/test -v scheduled-stop-349749:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -d /var/lib
	I0214 03:26:57.337515 1415694 oci.go:107] Successfully prepared a docker volume scheduled-stop-349749
	I0214 03:26:57.337563 1415694 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 03:26:57.337582 1415694 kic.go:194] Starting extracting preloaded images to volume ...
	I0214 03:26:57.337661 1415694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-349749:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0214 03:27:01.381609 1415694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-349749:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.04389879s)
	I0214 03:27:01.381630 1415694 kic.go:203] duration metric: took 4.044045 seconds to extract preloaded images to volume
	W0214 03:27:01.381778 1415694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0214 03:27:01.381879 1415694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0214 03:27:01.435395 1415694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-349749 --name scheduled-stop-349749 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-349749 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-349749 --network scheduled-stop-349749 --ip 192.168.67.2 --volume scheduled-stop-349749:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0
	I0214 03:27:01.741977 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Running}}
	I0214 03:27:01.765054 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Status}}
	I0214 03:27:01.786370 1415694 cli_runner.go:164] Run: docker exec scheduled-stop-349749 stat /var/lib/dpkg/alternatives/iptables
	I0214 03:27:01.844094 1415694 oci.go:144] the created container "scheduled-stop-349749" has a running status.
	I0214 03:27:01.844113 1415694 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa...
	I0214 03:27:02.189250 1415694 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0214 03:27:02.219898 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Status}}
	I0214 03:27:02.248294 1415694 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0214 03:27:02.248306 1415694 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-349749 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0214 03:27:02.341948 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Status}}
	I0214 03:27:02.366867 1415694 machine.go:88] provisioning docker machine ...
	I0214 03:27:02.366894 1415694 ubuntu.go:169] provisioning hostname "scheduled-stop-349749"
	I0214 03:27:02.366959 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:02.388642 1415694 main.go:141] libmachine: Using SSH client type: native
	I0214 03:27:02.389082 1415694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34194 <nil> <nil>}
	I0214 03:27:02.389092 1415694 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-349749 && echo "scheduled-stop-349749" | sudo tee /etc/hostname
	I0214 03:27:02.565932 1415694 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-349749
	
	I0214 03:27:02.566002 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:02.588875 1415694 main.go:141] libmachine: Using SSH client type: native
	I0214 03:27:02.589289 1415694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34194 <nil> <nil>}
	I0214 03:27:02.589305 1415694 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-349749' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-349749/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-349749' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0214 03:27:02.735554 1415694 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0214 03:27:02.735572 1415694 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18165-1266022/.minikube CaCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18165-1266022/.minikube}
	I0214 03:27:02.735592 1415694 ubuntu.go:177] setting up certificates
	I0214 03:27:02.735600 1415694 provision.go:83] configureAuth start
	I0214 03:27:02.735690 1415694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-349749
	I0214 03:27:02.752534 1415694 provision.go:138] copyHostCerts
	I0214 03:27:02.752585 1415694 exec_runner.go:144] found /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem, removing ...
	I0214 03:27:02.752592 1415694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem
	I0214 03:27:02.752655 1415694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.pem (1078 bytes)
	I0214 03:27:02.752737 1415694 exec_runner.go:144] found /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem, removing ...
	I0214 03:27:02.752740 1415694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem
	I0214 03:27:02.752768 1415694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/cert.pem (1123 bytes)
	I0214 03:27:02.752820 1415694 exec_runner.go:144] found /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem, removing ...
	I0214 03:27:02.752823 1415694 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem
	I0214 03:27:02.752845 1415694 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18165-1266022/.minikube/key.pem (1679 bytes)
	I0214 03:27:02.752885 1415694 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-349749 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube scheduled-stop-349749]
	I0214 03:27:03.430547 1415694 provision.go:172] copyRemoteCerts
	I0214 03:27:03.430605 1415694 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0214 03:27:03.430645 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:03.447115 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:03.544481 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0214 03:27:03.568411 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0214 03:27:03.592850 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0214 03:27:03.615855 1415694 provision.go:86] duration metric: configureAuth took 880.24137ms
	I0214 03:27:03.615879 1415694 ubuntu.go:193] setting minikube options for container-runtime
	I0214 03:27:03.616059 1415694 config.go:182] Loaded profile config "scheduled-stop-349749": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 03:27:03.616110 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:03.631972 1415694 main.go:141] libmachine: Using SSH client type: native
	I0214 03:27:03.632365 1415694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34194 <nil> <nil>}
	I0214 03:27:03.632376 1415694 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0214 03:27:03.763974 1415694 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0214 03:27:03.763985 1415694 ubuntu.go:71] root file system type: overlay
	I0214 03:27:03.764096 1415694 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0214 03:27:03.764168 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:03.780023 1415694 main.go:141] libmachine: Using SSH client type: native
	I0214 03:27:03.780455 1415694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34194 <nil> <nil>}
	I0214 03:27:03.780532 1415694 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0214 03:27:03.922652 1415694 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
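
The unit echoed back above relies on a standard systemd trick: an empty ExecStart= first clears the command inherited from the base dockerd unit, and the second ExecStart= installs the TLS-enabled command; without the empty directive, systemd would reject the unit for having two ExecStart lines. A minimal Go sketch of rendering such an override (the field names and abbreviated flags are illustrative, not minikube's actual provisioner code):

    // Render a docker.service override that clears the inherited ExecStart
    // before setting the TLS-enabled daemon command.
    package main

    import (
    	"os"
    	"text/template"
    )

    const unit = `[Service]
    # An empty ExecStart= clears the command inherited from the base unit;
    # systemd otherwise refuses a second ExecStart for Type=notify services.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}}
    `

    type opts struct {
    	Port   int
    	CACert string
    }

    func main() {
    	t := template.Must(template.New("docker").Parse(unit))
    	// Printed to stdout here; in the log above the rendered unit is piped
    	// through `sudo tee /lib/systemd/system/docker.service.new` over SSH.
    	if err := t.Execute(os.Stdout, opts{Port: 2376, CACert: "/etc/docker/ca.pem"}); err != nil {
    		panic(err)
    	}
    }
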
	I0214 03:27:03.922738 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:03.941122 1415694 main.go:141] libmachine: Using SSH client type: native
	I0214 03:27:03.941536 1415694 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bf490] 0x3c1c00 <nil>  [] 0s} 127.0.0.1 34194 <nil> <nil>}
	I0214 03:27:03.941552 1415694 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0214 03:27:04.677767 1415694 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-10-26 09:06:20.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-02-14 03:27:03.919010281 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
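
The compound command above is what makes re-provisioning idempotent: `diff -u` exits zero when the current unit already matches, so the file is only swapped in, and docker only restarted, when something actually changed. A sketch of the same pattern, assuming a root shell on the target host as minikube has over SSH:

    // Replace a systemd unit and restart the service only when the newly
    // rendered file differs from the installed one.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func updateUnit(current, proposed string) error {
    	// The replacement branch runs only on diff's non-zero exit,
    	// i.e. only when the two files differ.
    	cmd := fmt.Sprintf(
    		"sudo diff -u %s %s || { sudo mv %s %s; sudo systemctl daemon-reload && sudo systemctl restart docker; }",
    		current, proposed, proposed, current)
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	if err := updateUnit("/lib/systemd/system/docker.service",
    		"/lib/systemd/system/docker.service.new"); err != nil {
    		fmt.Println("update failed:", err)
    	}
    }
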
	I0214 03:27:04.677795 1415694 machine.go:91] provisioned docker machine in 2.310916085s
	I0214 03:27:04.677807 1415694 client.go:171] LocalClient.Create took 8.046786584s
	I0214 03:27:04.677826 1415694 start.go:167] duration metric: libmachine.API.Create for "scheduled-stop-349749" took 8.04684492s
	I0214 03:27:04.677833 1415694 start.go:300] post-start starting for "scheduled-stop-349749" (driver="docker")
	I0214 03:27:04.677843 1415694 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0214 03:27:04.677951 1415694 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0214 03:27:04.677993 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:04.693754 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:04.788601 1415694 ssh_runner.go:195] Run: cat /etc/os-release
	I0214 03:27:04.791459 1415694 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0214 03:27:04.791485 1415694 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0214 03:27:04.791495 1415694 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0214 03:27:04.791501 1415694 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0214 03:27:04.791510 1415694 filesync.go:126] Scanning /home/jenkins/minikube-integration/18165-1266022/.minikube/addons for local assets ...
	I0214 03:27:04.791566 1415694 filesync.go:126] Scanning /home/jenkins/minikube-integration/18165-1266022/.minikube/files for local assets ...
	I0214 03:27:04.791645 1415694 filesync.go:149] local asset: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem -> 12713802.pem in /etc/ssl/certs
	I0214 03:27:04.791778 1415694 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0214 03:27:04.799895 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem --> /etc/ssl/certs/12713802.pem (1708 bytes)
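
The filesync scan maps everything under .minikube/files onto the machine's root filesystem, which is how files/etc/ssl/certs/12713802.pem lands in /etc/ssl/certs above. A sketch of that path mapping:

    // Walk the local assets directory and derive each file's in-VM
    // destination by stripping the local prefix.
    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	src := "/home/jenkins/minikube-integration/18165-1266022/.minikube/files"
    	filepath.WalkDir(src, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		// files/etc/ssl/certs/X -> /etc/ssl/certs/X
    		dest := strings.TrimPrefix(path, src)
    		fmt.Printf("local asset: %s -> %s\n", path, dest)
    		return nil
    	})
    }
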
	I0214 03:27:04.823230 1415694 start.go:303] post-start completed in 145.382411ms
	I0214 03:27:04.823604 1415694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-349749
	I0214 03:27:04.838869 1415694 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/config.json ...
	I0214 03:27:04.839138 1415694 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:27:04.839179 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:04.854388 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:04.944110 1415694 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0214 03:27:04.948101 1415694 start.go:128] duration metric: createHost completed in 8.320861245s
	I0214 03:27:04.948118 1415694 start.go:83] releasing machines lock for "scheduled-stop-349749", held for 8.320985705s
	I0214 03:27:04.948189 1415694 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-349749
	I0214 03:27:04.962825 1415694 ssh_runner.go:195] Run: cat /version.json
	I0214 03:27:04.962868 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:04.963114 1415694 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0214 03:27:04.963160 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:04.981457 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:04.982844 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:05.208162 1415694 ssh_runner.go:195] Run: systemctl --version
	I0214 03:27:05.212227 1415694 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0214 03:27:05.220568 1415694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0214 03:27:05.245650 1415694 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0214 03:27:05.245736 1415694 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0214 03:27:05.275330 1415694 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0214 03:27:05.275347 1415694 start.go:475] detecting cgroup driver to use...
	I0214 03:27:05.275390 1415694 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:27:05.275500 1415694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:27:05.292537 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0214 03:27:05.302093 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0214 03:27:05.311389 1415694 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0214 03:27:05.311464 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0214 03:27:05.321060 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:27:05.330836 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0214 03:27:05.340618 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0214 03:27:05.350260 1415694 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0214 03:27:05.359192 1415694 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0214 03:27:05.368753 1415694 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0214 03:27:05.377177 1415694 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0214 03:27:05.385555 1415694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:27:05.486154 1415694 ssh_runner.go:195] Run: sudo systemctl restart containerd
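
The run of sed commands above rewrites /etc/containerd/config.toml so containerd agrees with the detected "cgroupfs" driver and the pause:3.9 sandbox image before the restart. A sketch of the same edits done in-memory (the TOML fragment is illustrative):

    // Apply the two key config.toml rewrites: disable SystemdCgroup and
    // pin the sandbox image, preserving each line's indentation.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    sandbox_image = "registry.k8s.io/pause:3.8"`

    	conf = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
    		ReplaceAllString(conf, `${1}SystemdCgroup = false`)
    	conf = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
    		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
    	fmt.Println(conf)
    }
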
	I0214 03:27:05.589750 1415694 start.go:475] detecting cgroup driver to use...
	I0214 03:27:05.589786 1415694 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0214 03:27:05.589835 1415694 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0214 03:27:05.605477 1415694 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0214 03:27:05.605541 1415694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0214 03:27:05.620502 1415694 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0214 03:27:05.638687 1415694 ssh_runner.go:195] Run: which cri-dockerd
	I0214 03:27:05.642265 1415694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0214 03:27:05.652226 1415694 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0214 03:27:05.671412 1415694 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0214 03:27:05.778753 1415694 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0214 03:27:05.890080 1415694 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0214 03:27:05.890197 1415694 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0214 03:27:05.911282 1415694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:27:06.009951 1415694 ssh_runner.go:195] Run: sudo systemctl restart docker
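
The 130-byte daemon.json scp'd above is not echoed in the log; a sketch of what such a payload plausibly contains, given that docker is being configured for the "cgroupfs" driver (the exact keys are an assumption):

    // Build a minimal daemon.json selecting the cgroupfs cgroup driver.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	cfg := map[string]any{
    		// Assumed key; the log only reports the file's size.
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	// minikube copies the bytes to /etc/docker/daemon.json, then
    	// daemon-reloads and restarts docker as the next log lines show.
    	fmt.Println(string(out))
    }
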
	I0214 03:27:06.246810 1415694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0214 03:27:06.258941 1415694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0214 03:27:06.270798 1415694 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0214 03:27:06.360605 1415694 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0214 03:27:06.454437 1415694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:27:06.542633 1415694 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0214 03:27:06.556284 1415694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0214 03:27:06.567217 1415694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0214 03:27:06.653757 1415694 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0214 03:27:06.731135 1415694 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0214 03:27:06.731207 1415694 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0214 03:27:06.734885 1415694 start.go:543] Will wait 60s for crictl version
	I0214 03:27:06.734941 1415694 ssh_runner.go:195] Run: which crictl
	I0214 03:27:06.738685 1415694 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0214 03:27:06.790589 1415694 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
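
Both 60-second waits above are simple polls: first for the cri-dockerd socket to appear, then for crictl to answer with a version. A sketch under those assumptions:

    // Poll for the CRI socket, then for a working crictl, within one budget.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"time"
    )

    func waitFor(deadline time.Time, probe func() error) error {
    	for time.Now().Before(deadline) {
    		if err := probe(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out")
    }

    func main() {
    	deadline := time.Now().Add(60 * time.Second)
    	_ = waitFor(deadline, func() error {
    		_, err := os.Stat("/var/run/cri-dockerd.sock")
    		return err
    	})
    	_ = waitFor(deadline, func() error {
    		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
    	})
    }
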
	I0214 03:27:06.790661 1415694 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0214 03:27:06.813747 1415694 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0214 03:27:06.841447 1415694 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I0214 03:27:06.841536 1415694 cli_runner.go:164] Run: docker network inspect scheduled-stop-349749 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0214 03:27:06.856536 1415694 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0214 03:27:06.860071 1415694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0214 03:27:06.870993 1415694 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 03:27:06.871055 1415694 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0214 03:27:06.889007 1415694 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0214 03:27:06.889021 1415694 docker.go:615] Images already preloaded, skipping extraction
	I0214 03:27:06.889088 1415694 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0214 03:27:06.906982 1415694 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0214 03:27:06.906997 1415694 cache_images.go:84] Images are preloaded, skipping loading
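
The preload check lists the runtime's images and skips tarball extraction when every expected v1.28.4 image is already present; note the two listings above differ only in ordering. A sketch of that comparison (expected list abbreviated):

    // Compare `docker images` output against the expected preload set.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	expected := []string{
    		"registry.k8s.io/kube-apiserver:v1.28.4",
    		"registry.k8s.io/etcd:3.5.9-0",
    		"registry.k8s.io/pause:3.9",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			fmt.Println("missing, would extract preload tarball:", img)
    		}
    	}
    }
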
	I0214 03:27:06.907071 1415694 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0214 03:27:06.963424 1415694 cni.go:84] Creating CNI manager for ""
	I0214 03:27:06.963440 1415694 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 03:27:06.963456 1415694 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0214 03:27:06.963474 1415694 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-349749 NodeName:scheduled-stop-349749 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0214 03:27:06.963626 1415694 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "scheduled-stop-349749"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0214 03:27:06.963713 1415694 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=scheduled-stop-349749 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-349749 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0214 03:27:06.963777 1415694 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0214 03:27:06.972650 1415694 binaries.go:44] Found k8s binaries, skipping transfer
	I0214 03:27:06.972715 1415694 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0214 03:27:06.981476 1415694 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0214 03:27:07.001187 1415694 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0214 03:27:07.021105 1415694 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0214 03:27:07.039346 1415694 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0214 03:27:07.042680 1415694 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
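
The /etc/hosts updates above (for host.minikube.internal earlier, control-plane.minikube.internal here) follow a drop-then-append pattern: grep -v removes any stale record for the name, the fresh record is appended, and the result is copied back with sudo cp so the edit works on a bind-mounted hosts file. A sketch of the same rewrite:

    // Remove any existing tab-separated record for the name, then append
    // the fresh one.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func inject(hosts, name, ip string) string {
    	var kept []string
    	for _, line := range strings.Split(hosts, "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
    	data, _ := os.ReadFile("/etc/hosts")
    	fmt.Print(inject(strings.TrimRight(string(data), "\n"),
    		"control-plane.minikube.internal", "192.168.67.2"))
    }
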
	I0214 03:27:07.053631 1415694 certs.go:56] Setting up /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749 for IP: 192.168.67.2
	I0214 03:27:07.053657 1415694 certs.go:190] acquiring lock for shared ca certs: {Name:mk38eec77f10b2e9943b70dec5fadf9f48ce78cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:07.053787 1415694 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key
	I0214 03:27:07.053826 1415694 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key
	I0214 03:27:07.053879 1415694 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/client.key
	I0214 03:27:07.053888 1415694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/client.crt with IP's: []
	I0214 03:27:07.481047 1415694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/client.crt ...
	I0214 03:27:07.481061 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/client.crt: {Name:mk6051a7857790e67bc084315bed80a6e17abd89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:07.481262 1415694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/client.key ...
	I0214 03:27:07.481270 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/client.key: {Name:mk9286f710ed7b01884aeb3dccba935a50a6f7f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:07.481359 1415694 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.key.c7fa3a9e
	I0214 03:27:07.481371 1415694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0214 03:27:08.334189 1415694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.crt.c7fa3a9e ...
	I0214 03:27:08.334213 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.crt.c7fa3a9e: {Name:mk18fd95372695284ac11b42dbf281d2e497d807 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:08.334425 1415694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.key.c7fa3a9e ...
	I0214 03:27:08.334435 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.key.c7fa3a9e: {Name:mk5ceedf8208fd791c8936fec4711033669e84a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:08.334519 1415694 certs.go:337] copying /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.crt
	I0214 03:27:08.334599 1415694 certs.go:341] copying /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.key
	I0214 03:27:08.334647 1415694 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.key
	I0214 03:27:08.334657 1415694 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.crt with IP's: []
	I0214 03:27:08.577594 1415694 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.crt ...
	I0214 03:27:08.577608 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.crt: {Name:mkb11d3efb9940a40cf5fe9d27f5d151e2f97b35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:08.577802 1415694 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.key ...
	I0214 03:27:08.577816 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.key: {Name:mk537769661bbe29d5b98bbb072dd8137871fa6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
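
Each certificate above is generated locally and signed by an existing CA (minikubeCA or the proxy client CA), then copied into /var/lib/minikube/certs. A sketch of signing an apiserver-style serving cert with the IP SANs from the log, where a freshly generated CA stands in for the real .minikube/ca.key:

    // Generate a CA, then sign a serving cert carrying the cluster's IP SANs.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for .minikube/ca.key
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leaf := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		// The IP SANs listed in the log line above.
    		IPAddresses: []net.IP{
    			net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
    			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		},
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
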
	I0214 03:27:08.578006 1415694 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380.pem (1338 bytes)
	W0214 03:27:08.578042 1415694 certs.go:433] ignoring /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380_empty.pem, impossibly tiny 0 bytes
	I0214 03:27:08.578051 1415694 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca-key.pem (1675 bytes)
	I0214 03:27:08.578077 1415694 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/ca.pem (1078 bytes)
	I0214 03:27:08.578104 1415694 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/cert.pem (1123 bytes)
	I0214 03:27:08.578125 1415694 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/certs/key.pem (1679 bytes)
	I0214 03:27:08.578169 1415694 certs.go:437] found cert: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem (1708 bytes)
	I0214 03:27:08.578806 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0214 03:27:08.603572 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0214 03:27:08.627842 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0214 03:27:08.651693 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/scheduled-stop-349749/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0214 03:27:08.676317 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0214 03:27:08.700275 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0214 03:27:08.724460 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0214 03:27:08.749588 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0214 03:27:08.773142 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/ssl/certs/12713802.pem --> /usr/share/ca-certificates/12713802.pem (1708 bytes)
	I0214 03:27:08.797931 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0214 03:27:08.821947 1415694 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18165-1266022/.minikube/certs/1271380.pem --> /usr/share/ca-certificates/1271380.pem (1338 bytes)
	I0214 03:27:08.845274 1415694 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0214 03:27:08.863183 1415694 ssh_runner.go:195] Run: openssl version
	I0214 03:27:08.868561 1415694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12713802.pem && ln -fs /usr/share/ca-certificates/12713802.pem /etc/ssl/certs/12713802.pem"
	I0214 03:27:08.878111 1415694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12713802.pem
	I0214 03:27:08.882695 1415694 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Feb 14 03:03 /usr/share/ca-certificates/12713802.pem
	I0214 03:27:08.882754 1415694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12713802.pem
	I0214 03:27:08.889742 1415694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12713802.pem /etc/ssl/certs/3ec20f2e.0"
	I0214 03:27:08.899081 1415694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0214 03:27:08.908315 1415694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:27:08.911715 1415694 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Feb 14 02:58 /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:27:08.911771 1415694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0214 03:27:08.918907 1415694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0214 03:27:08.928332 1415694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1271380.pem && ln -fs /usr/share/ca-certificates/1271380.pem /etc/ssl/certs/1271380.pem"
	I0214 03:27:08.938104 1415694 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1271380.pem
	I0214 03:27:08.941480 1415694 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Feb 14 03:03 /usr/share/ca-certificates/1271380.pem
	I0214 03:27:08.941542 1415694 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1271380.pem
	I0214 03:27:08.948646 1415694 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1271380.pem /etc/ssl/certs/51391683.0"
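
The test -L / ln -fs pairs above create OpenSSL-style hash links: /etc/ssl/certs/<subject-hash>.0 must point at each PEM so TLS clients can locate it by subject, e.g. b5213941.0 for minikubeCA.pem in the log. A sketch that computes the hash by shelling out to openssl, as the step itself does:

    // Create the <subject-hash>.0 symlink for a trusted PEM certificate.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	// ln -fs equivalent: drop a stale link before creating the new one.
    	os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pem)
    }
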
	I0214 03:27:08.958148 1415694 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0214 03:27:08.961394 1415694 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0214 03:27:08.961436 1415694 kubeadm.go:404] StartCluster: {Name:scheduled-stop-349749 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:scheduled-stop-349749 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:27:08.961546 1415694 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0214 03:27:08.977838 1415694 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0214 03:27:08.986828 1415694 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0214 03:27:08.995546 1415694 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0214 03:27:08.995602 1415694 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0214 03:27:09.006196 1415694 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0214 03:27:09.006233 1415694 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0214 03:27:09.111336 1415694 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1053-aws\n", err: exit status 1
	I0214 03:27:09.194665 1415694 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0214 03:27:24.527520 1415694 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0214 03:27:24.527577 1415694 kubeadm.go:322] [preflight] Running pre-flight checks
	I0214 03:27:24.527674 1415694 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0214 03:27:24.527725 1415694 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1053-aws
	I0214 03:27:24.527756 1415694 kubeadm.go:322] OS: Linux
	I0214 03:27:24.527803 1415694 kubeadm.go:322] CGROUPS_CPU: enabled
	I0214 03:27:24.527858 1415694 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0214 03:27:24.527901 1415694 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0214 03:27:24.527948 1415694 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0214 03:27:24.527993 1415694 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0214 03:27:24.528037 1415694 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0214 03:27:24.528104 1415694 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0214 03:27:24.528158 1415694 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0214 03:27:24.528218 1415694 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0214 03:27:24.528298 1415694 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0214 03:27:24.528389 1415694 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0214 03:27:24.528473 1415694 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0214 03:27:24.528529 1415694 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0214 03:27:24.530663 1415694 out.go:204]   - Generating certificates and keys ...
	I0214 03:27:24.530742 1415694 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0214 03:27:24.530800 1415694 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0214 03:27:24.530861 1415694 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0214 03:27:24.530912 1415694 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0214 03:27:24.530966 1415694 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0214 03:27:24.531011 1415694 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0214 03:27:24.531059 1415694 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0214 03:27:24.531172 1415694 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-349749] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0214 03:27:24.531219 1415694 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0214 03:27:24.531330 1415694 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-349749] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0214 03:27:24.531388 1415694 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0214 03:27:24.531445 1415694 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0214 03:27:24.531491 1415694 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0214 03:27:24.531547 1415694 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0214 03:27:24.531600 1415694 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0214 03:27:24.531649 1415694 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0214 03:27:24.531724 1415694 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0214 03:27:24.531773 1415694 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0214 03:27:24.531856 1415694 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0214 03:27:24.531915 1415694 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0214 03:27:24.535154 1415694 out.go:204]   - Booting up control plane ...
	I0214 03:27:24.535259 1415694 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0214 03:27:24.535331 1415694 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0214 03:27:24.535390 1415694 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0214 03:27:24.535484 1415694 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0214 03:27:24.535570 1415694 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0214 03:27:24.535605 1415694 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0214 03:27:24.535771 1415694 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0214 03:27:24.535840 1415694 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.002447 seconds
	I0214 03:27:24.535935 1415694 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0214 03:27:24.536059 1415694 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0214 03:27:24.536112 1415694 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0214 03:27:24.536282 1415694 kubeadm.go:322] [mark-control-plane] Marking the node scheduled-stop-349749 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0214 03:27:24.536332 1415694 kubeadm.go:322] [bootstrap-token] Using token: g0em0r.5x3qvssaol2a7jf1
	I0214 03:27:24.538608 1415694 out.go:204]   - Configuring RBAC rules ...
	I0214 03:27:24.538830 1415694 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0214 03:27:24.538927 1415694 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0214 03:27:24.539077 1415694 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0214 03:27:24.539202 1415694 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0214 03:27:24.539317 1415694 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0214 03:27:24.539415 1415694 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0214 03:27:24.539529 1415694 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0214 03:27:24.539571 1415694 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0214 03:27:24.539616 1415694 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0214 03:27:24.539620 1415694 kubeadm.go:322] 
	I0214 03:27:24.539704 1415694 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0214 03:27:24.539709 1415694 kubeadm.go:322] 
	I0214 03:27:24.539789 1415694 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0214 03:27:24.539793 1415694 kubeadm.go:322] 
	I0214 03:27:24.539815 1415694 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0214 03:27:24.539970 1415694 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0214 03:27:24.540034 1415694 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0214 03:27:24.540038 1415694 kubeadm.go:322] 
	I0214 03:27:24.540087 1415694 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0214 03:27:24.540090 1415694 kubeadm.go:322] 
	I0214 03:27:24.540132 1415694 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0214 03:27:24.540136 1415694 kubeadm.go:322] 
	I0214 03:27:24.540183 1415694 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0214 03:27:24.540257 1415694 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0214 03:27:24.540322 1415694 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0214 03:27:24.540325 1415694 kubeadm.go:322] 
	I0214 03:27:24.540407 1415694 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0214 03:27:24.540484 1415694 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0214 03:27:24.540491 1415694 kubeadm.go:322] 
	I0214 03:27:24.540579 1415694 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token g0em0r.5x3qvssaol2a7jf1 \
	I0214 03:27:24.540681 1415694 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185 \
	I0214 03:27:24.540700 1415694 kubeadm.go:322] 	--control-plane 
	I0214 03:27:24.540703 1415694 kubeadm.go:322] 
	I0214 03:27:24.540780 1415694 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0214 03:27:24.540783 1415694 kubeadm.go:322] 
	I0214 03:27:24.540891 1415694 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token g0em0r.5x3qvssaol2a7jf1 \
	I0214 03:27:24.541021 1415694 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:09ff10076c39b3ceb6e5878c674bc11df5aa3639e198c3f6e30e096135e90185 
	I0214 03:27:24.541035 1415694 cni.go:84] Creating CNI manager for ""
	I0214 03:27:24.541053 1415694 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 03:27:24.544593 1415694 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0214 03:27:24.546577 1415694 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0214 03:27:24.558585 1415694 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
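
The 457-byte 1-k8s.conflist written above is not echoed in the log; a sketch of a plausible bridge conflist for the 10.244.0.0/16 pod CIDR chosen earlier (the contents are an assumption, modeled on a typical bridge CNI config):

    // Validate a candidate bridge CNI conflist before installing it.
    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    const conflist = `{
      "cniVersion": "1.0.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	var v map[string]any
    	if err := json.Unmarshal([]byte(conflist), &v); err != nil {
    		panic(err) // a malformed conflist would leave pods without networking
    	}
    	fmt.Println("valid conflist for", v["name"])
    }
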
	I0214 03:27:24.601359 1415694 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0214 03:27:24.601452 1415694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:27:24.601521 1415694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d minikube.k8s.io/name=scheduled-stop-349749 minikube.k8s.io/updated_at=2024_02_14T03_27_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0214 03:27:24.887221 1415694 ops.go:34] apiserver oom_adj: -16
	I0214 03:27:24.887237 1415694 kubeadm.go:1088] duration metric: took 285.844839ms to wait for elevateKubeSystemPrivileges.
	I0214 03:27:24.887247 1415694 kubeadm.go:406] StartCluster complete in 15.925817634s
	I0214 03:27:24.887262 1415694 settings.go:142] acquiring lock: {Name:mka5ccfc6e6b301490609b4401d47e44477d3784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:24.887318 1415694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 03:27:24.888315 1415694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/kubeconfig: {Name:mk66f7cad9af599b8ab92f8fcd3383675b5457c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 03:27:24.891332 1415694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0214 03:27:24.891616 1415694 config.go:182] Loaded profile config "scheduled-stop-349749": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 03:27:24.891649 1415694 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0214 03:27:24.891811 1415694 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-349749"
	I0214 03:27:24.891819 1415694 addons.go:234] Setting addon storage-provisioner=true in "scheduled-stop-349749"
	I0214 03:27:24.891874 1415694 host.go:66] Checking if "scheduled-stop-349749" exists ...
	I0214 03:27:24.892416 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Status}}
	I0214 03:27:24.893737 1415694 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-349749"
	I0214 03:27:24.893751 1415694 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-349749"
	I0214 03:27:24.894066 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Status}}
	I0214 03:27:24.918727 1415694 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0214 03:27:24.920543 1415694 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:27:24.920554 1415694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0214 03:27:24.920622 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:24.952715 1415694 addons.go:234] Setting addon default-storageclass=true in "scheduled-stop-349749"
	I0214 03:27:24.952742 1415694 host.go:66] Checking if "scheduled-stop-349749" exists ...
	I0214 03:27:24.953183 1415694 cli_runner.go:164] Run: docker container inspect scheduled-stop-349749 --format={{.State.Status}}
	I0214 03:27:24.967809 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:24.982460 1415694 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0214 03:27:24.982480 1415694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0214 03:27:24.982560 1415694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-349749
	I0214 03:27:25.007935 1415694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34194 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/scheduled-stop-349749/id_rsa Username:docker}
	I0214 03:27:25.074759 1415694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0214 03:27:25.177431 1415694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0214 03:27:25.215086 1415694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0214 03:27:25.396928 1415694 kapi.go:248] "coredns" deployment in "kube-system" namespace and "scheduled-stop-349749" context rescaled to 1 replicas
	I0214 03:27:25.396956 1415694 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0214 03:27:25.400526 1415694 out.go:177] * Verifying Kubernetes components...
	I0214 03:27:25.402569 1415694 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:27:26.153182 1415694 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.078380354s)
	I0214 03:27:26.153200 1415694 start.go:929] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0214 03:27:26.358603 1415694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.181145415s)
	I0214 03:27:26.358653 1415694 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.143555172s)
	I0214 03:27:26.359769 1415694 api_server.go:52] waiting for apiserver process to appear ...
	I0214 03:27:26.359831 1415694 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:27:26.369099 1415694 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0214 03:27:26.371413 1415694 addons.go:505] enable addons completed in 1.479750667s: enabled=[storage-provisioner default-storageclass]
	I0214 03:27:26.372162 1415694 api_server.go:72] duration metric: took 975.180752ms to wait for apiserver process to appear ...
	I0214 03:27:26.372172 1415694 api_server.go:88] waiting for apiserver healthz status ...
	I0214 03:27:26.372190 1415694 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0214 03:27:26.382173 1415694 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0214 03:27:26.383709 1415694 api_server.go:141] control plane version: v1.28.4
	I0214 03:27:26.383734 1415694 api_server.go:131] duration metric: took 11.552024ms to wait for apiserver health ...
	I0214 03:27:26.383741 1415694 system_pods.go:43] waiting for kube-system pods to appear ...
	I0214 03:27:26.390587 1415694 system_pods.go:59] 5 kube-system pods found
	I0214 03:27:26.390621 1415694 system_pods.go:61] "etcd-scheduled-stop-349749" [62109357-528d-4a6a-8e51-32482a1da4f4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0214 03:27:26.390631 1415694 system_pods.go:61] "kube-apiserver-scheduled-stop-349749" [de126834-2a38-4570-9f66-86b920363e0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0214 03:27:26.390640 1415694 system_pods.go:61] "kube-controller-manager-scheduled-stop-349749" [30216700-6d31-478b-82dc-f9c2a61d2399] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0214 03:27:26.390647 1415694 system_pods.go:61] "kube-scheduler-scheduled-stop-349749" [8ec92d0c-667c-42ee-a57e-1496724fae32] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0214 03:27:26.390656 1415694 system_pods.go:61] "storage-provisioner" [21fe8d14-643d-4f54-b85d-f5ec965ba4a7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0214 03:27:26.390663 1415694 system_pods.go:74] duration metric: took 6.916586ms to wait for pod list to return data ...
	I0214 03:27:26.390672 1415694 kubeadm.go:581] duration metric: took 993.69558ms to wait for : map[apiserver:true system_pods:true] ...
	I0214 03:27:26.390683 1415694 node_conditions.go:102] verifying NodePressure condition ...
	I0214 03:27:26.394042 1415694 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0214 03:27:26.394060 1415694 node_conditions.go:123] node cpu capacity is 2
	I0214 03:27:26.394070 1415694 node_conditions.go:105] duration metric: took 3.383382ms to run NodePressure ...
	I0214 03:27:26.394080 1415694 start.go:228] waiting for startup goroutines ...
	I0214 03:27:26.394085 1415694 start.go:233] waiting for cluster config update ...
	I0214 03:27:26.394094 1415694 start.go:242] writing updated cluster config ...
	I0214 03:27:26.394427 1415694 ssh_runner.go:195] Run: rm -f paused
	I0214 03:27:26.462392 1415694 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0214 03:27:26.464708 1415694 out.go:177] * Done! kubectl is now configured to use "scheduled-stop-349749" cluster and "default" namespace by default
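
The sed pipeline completed at 03:27:26 above injects a hosts block for host.minikube.internal into the CoreDNS Corefile. A minimal verification sketch, assuming the scheduled-stop-349749 profile is still running and its kubeconfig context exists:

	kubectl --context scheduled-stop-349749 -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	# expect the block to map 192.168.67.1 to host.minikube.internal, ending in fallthrough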
	
	
	==> Docker <==
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.085628712Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.099914477Z" level=info msg="Loading containers: start."
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.183021755Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.215392181Z" level=info msg="Loading containers: done."
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.225262688Z" level=info msg="Docker daemon" commit=311b9ff graphdriver=overlay2 version=24.0.7
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.225335761Z" level=info msg="Daemon has completed initialization"
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.244624183Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 14 03:27:06 scheduled-stop-349749 systemd[1]: Started Docker Application Container Engine.
	Feb 14 03:27:06 scheduled-stop-349749 dockerd[1101]: time="2024-02-14T03:27:06.246405931Z" level=info msg="API listen on [::]:2376"
	Feb 14 03:27:06 scheduled-stop-349749 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Start docker client with request timeout 0s"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Loaded network plugin cni"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Docker cri networking managed by network plugin cni"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Docker Info: &{ID:7734decc-8c3d-4e6e-87a3-3997a1958cff Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:35 SystemTime:2024-02-14T03:27:06.720694204Z LoggingDriver:json-file CgroupDriver:cgroupfs CgroupVersion:1 NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 22.04.3 LTS (containerized) OSVersion:22.04 OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0x400041c770 NCPU:2 MemTotal:8215035904 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:scheduled-stop-349749 Labels:[provider=docker] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dd1e886e55dd695541fdcd67420c2888645a495 Expected:3dd1e886e55dd695541fdcd67420c2888645a495} RuncCommit:{ID:v1.1.10-0-g18a0cb0 Expected:v1.1.10-0-g18a0cb0} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin] ProductLicense: DefaultAddressPools:[] Warnings:[]}"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Setting cgroupDriver cgroupfs"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Feb 14 03:27:06 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:06Z" level=info msg="Start cri-dockerd grpc backend"
	Feb 14 03:27:06 scheduled-stop-349749 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Feb 14 03:27:17 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/182324b8c73737535950faed9e996c03108e42dc8d67a4d7761382bc09f74137/resolv.conf as [nameserver 192.168.67.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 14 03:27:17 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5b1a129d100dcfb317fa1c4362a2fc4be3d497ebc70e5a850fc8cad87b9e9004/resolv.conf as [nameserver 192.168.67.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 14 03:27:17 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e3b6da5c313b76e10c194541825b16ad0c097862104be90c8f7a1f93b52c244c/resolv.conf as [nameserver 192.168.67.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Feb 14 03:27:17 scheduled-stop-349749 cri-dockerd[1301]: time="2024-02-14T03:27:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b514c99a1616d5ca0868adcd3a201ef4ee8c5c4bb8b095f8ba6809c24a782974/resolv.conf as [nameserver 192.168.67.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c8bc7421b0777       04b4c447bb9d4       10 seconds ago      Running             kube-apiserver            0                   b514c99a1616d       kube-apiserver-scheduled-stop-349749
	052eb38343803       9cdd6470f48c8       10 seconds ago      Running             etcd                      0                   5b1a129d100dc       etcd-scheduled-stop-349749
	e27a7bed51778       9961cbceaf234       10 seconds ago      Running             kube-controller-manager   0                   e3b6da5c313b7       kube-controller-manager-scheduled-stop-349749
	ca1c47a7cf0f7       05c284c929889       10 seconds ago      Running             kube-scheduler            0                   182324b8c7373       kube-scheduler-scheduled-stop-349749
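
The listing above is in CRI format; a sketch of reproducing it from a shell on the node, assuming crictl is available inside the minikube container:

	out/minikube-linux-arm64 -p scheduled-stop-349749 ssh "sudo crictl ps -a"  # assumes crictl ships in the node image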
	
	
	==> describe nodes <==
	Name:               scheduled-stop-349749
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-349749
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a5eca87e70081d242c0fa2e2466e3725e217444d
	                    minikube.k8s.io/name=scheduled-stop-349749
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_02_14T03_27_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 14 Feb 2024 03:27:21 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-349749
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 14 Feb 2024 03:27:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 14 Feb 2024 03:27:24 +0000   Wed, 14 Feb 2024 03:27:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 14 Feb 2024 03:27:24 +0000   Wed, 14 Feb 2024 03:27:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 14 Feb 2024 03:27:24 +0000   Wed, 14 Feb 2024 03:27:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 14 Feb 2024 03:27:24 +0000   Wed, 14 Feb 2024 03:27:24 +0000   KubeletNotReady              container runtime status check may not have completed yet
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    scheduled-stop-349749
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 0700c894d5924082ab4aefd1df2beefc
	  System UUID:                2e84ce44-677a-4fdc-a439-9a0ea2d3196d
	  Boot ID:                    0ec78279-ad11-40d5-8717-d4c1429371b1
	  Kernel Version:             5.15.0-1053-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-349749                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5s
	  kube-system                 kube-apiserver-scheduled-stop-349749             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-349749    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-349749             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node scheduled-stop-349749 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node scheduled-stop-349749 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node scheduled-stop-349749 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             4s    kubelet  Node scheduled-stop-349749 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
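
The node.kubernetes.io/not-ready:NoSchedule taint listed above is what left storage-provisioner Pending in the earlier pod list; a sketch of querying the taint directly, assuming the cluster is still reachable:

	kubectl --context scheduled-stop-349749 get node scheduled-stop-349749 -o jsonpath='{.spec.taints}'  # assumes the cluster is still up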
	
	
	==> dmesg <==
	[  +0.001118] FS-Cache: O-key=[8] 'f5623b0000000000'
	[  +0.000719] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000951] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=000000005d7fa387
	[  +0.001080] FS-Cache: N-key=[8] 'f5623b0000000000'
	[  +0.002745] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.000986] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=0000000005c14f3f
	[  +0.001144] FS-Cache: O-key=[8] 'f5623b0000000000'
	[  +0.000723] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=00000000f768614b
	[  +0.001085] FS-Cache: N-key=[8] 'f5623b0000000000'
	[  +2.337756] FS-Cache: Duplicate cookie detected
	[  +0.000800] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001050] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=000000001ceffc46
	[  +0.001184] FS-Cache: O-key=[8] 'f4623b0000000000'
	[  +0.000778] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001065] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=000000005bc430d5
	[  +0.001202] FS-Cache: N-key=[8] 'f4623b0000000000'
	[  +0.373448] FS-Cache: Duplicate cookie detected
	[  +0.000816] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001030] FS-Cache: O-cookie d=00000000c82e8d73{9p.inode} n=000000004c5a19ad
	[  +0.001115] FS-Cache: O-key=[8] 'fa623b0000000000'
	[  +0.000765] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000952] FS-Cache: N-cookie d=00000000c82e8d73{9p.inode} n=00000000e1123212
	[  +0.001047] FS-Cache: N-key=[8] 'fa623b0000000000'
	
	
	==> etcd [052eb3834380] <==
	{"level":"info","ts":"2024-02-14T03:27:18.08379Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-02-14T03:27:18.084041Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8688e899f7831fc7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-02-14T03:27:18.084159Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:27:18.08419Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:27:18.084198Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-02-14T03:27:18.084412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-02-14T03:27:18.084476Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-02-14T03:27:18.351703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-02-14T03:27:18.351956Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-02-14T03:27:18.352123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-02-14T03:27:18.352246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-02-14T03:27:18.352354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-14T03:27:18.352477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-02-14T03:27:18.352573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-02-14T03:27:18.355797Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:27:18.359885Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:scheduled-stop-349749 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-02-14T03:27:18.36016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:27:18.360692Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-02-14T03:27:18.360828Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-02-14T03:27:18.36196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-02-14T03:27:18.360371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-02-14T03:27:18.366432Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-02-14T03:27:18.36033Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:27:18.368995Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-02-14T03:27:18.369166Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 03:27:28 up  6:09,  0 users,  load average: 2.17, 1.90, 2.02
	Linux scheduled-stop-349749 5.15.0-1053-aws #58~20.04.1-Ubuntu SMP Mon Jan 22 17:19:04 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	
	==> kube-apiserver [c8bc7421b077] <==
	I0214 03:27:21.693046       1 autoregister_controller.go:141] Starting autoregister controller
	I0214 03:27:21.693081       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0214 03:27:21.693130       1 cache.go:39] Caches are synced for autoregister controller
	I0214 03:27:21.719697       1 shared_informer.go:318] Caches are synced for configmaps
	I0214 03:27:21.720882       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0214 03:27:21.727346       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0214 03:27:21.727369       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0214 03:27:21.727493       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0214 03:27:21.728360       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0214 03:27:21.731493       1 controller.go:624] quota admission added evaluator for: namespaces
	I0214 03:27:21.743970       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0214 03:27:21.765610       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0214 03:27:22.426122       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0214 03:27:22.430454       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0214 03:27:22.430644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0214 03:27:22.960849       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0214 03:27:23.009013       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0214 03:27:23.065641       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0214 03:27:23.077914       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0214 03:27:23.079181       1 controller.go:624] quota admission added evaluator for: endpoints
	I0214 03:27:23.085074       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0214 03:27:23.641795       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0214 03:27:24.375712       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0214 03:27:24.391258       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0214 03:27:24.402373       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
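
The healthz probe logged at 03:27:26 in the minikube output hits this apiserver over TLS; a sketch of issuing it by hand, assuming the node is still running:

	out/minikube-linux-arm64 -p scheduled-stop-349749 ssh "curl -sk https://192.168.67.2:8443/healthz"  # -k: self-signed cert; /healthz typically permits anonymous access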
	
	
	==> kube-controller-manager [e27a7bed5177] <==
	I0214 03:27:25.497026       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0214 03:27:25.497080       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0214 03:27:25.497135       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0214 03:27:25.497288       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0214 03:27:25.497413       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0214 03:27:25.497551       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0214 03:27:25.497665       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="networkpolicies.networking.k8s.io"
	I0214 03:27:25.497797       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0214 03:27:25.497945       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="serviceaccounts"
	I0214 03:27:25.498075       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="deployments.apps"
	I0214 03:27:25.498183       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0214 03:27:25.498297       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0214 03:27:25.498412       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="replicasets.apps"
	I0214 03:27:25.498522       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="statefulsets.apps"
	I0214 03:27:25.498691       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0214 03:27:25.498827       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0214 03:27:25.498955       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpointslices.discovery.k8s.io"
	I0214 03:27:25.499070       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0214 03:27:25.499098       1 controllermanager.go:642] "Started controller" controller="resourcequota-controller"
	I0214 03:27:25.499325       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0214 03:27:25.499568       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0214 03:27:25.499675       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0214 03:27:25.660168       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0214 03:27:25.660456       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0214 03:27:25.660579       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	
	
	==> kube-scheduler [ca1c47a7cf0f] <==
	E0214 03:27:21.683914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 03:27:21.684042       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 03:27:21.684155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0214 03:27:21.684296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0214 03:27:21.684331       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0214 03:27:21.684547       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0214 03:27:22.570309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0214 03:27:22.570546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0214 03:27:22.584418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0214 03:27:22.584624       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0214 03:27:22.627548       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0214 03:27:22.627801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0214 03:27:22.660215       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0214 03:27:22.660419       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0214 03:27:22.668888       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0214 03:27:22.669098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0214 03:27:22.675512       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0214 03:27:22.675549       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0214 03:27:22.695793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0214 03:27:22.696011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0214 03:27:22.699418       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0214 03:27:22.699605       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0214 03:27:22.953158       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0214 03:27:22.953359       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0214 03:27:24.862700       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
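
The forbidden warnings above are the usual startup race: the scheduler's informers begin listing before the apiserver finishes bootstrapping RBAC, and they stop once caches sync (last line). A sketch of confirming the grant after startup, assuming the cluster is reachable:

	kubectl --context scheduled-stop-349749 auth can-i list replicasets.apps --as=system:kube-scheduler  # expect "yes" once bootstrap roles exist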
	
	
	==> kubelet <==
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.844325    2339 topology_manager.go:215] "Topology Admit Handler" podUID="e961e0f2551659c9afd36579a75aecd7" podNamespace="kube-system" podName="kube-apiserver-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.844392    2339 topology_manager.go:215] "Topology Admit Handler" podUID="e6c1886f55face108f38dbdc76fdc609" podNamespace="kube-system" podName="kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.844451    2339 topology_manager.go:215] "Topology Admit Handler" podUID="040d7f6f0e0ec8dade9c7b3ecca450f5" podNamespace="kube-system" podName="kube-scheduler-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: E0214 03:27:24.863520    2339 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-scheduled-stop-349749\" already exists" pod="kube-system/etcd-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921648    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-ca-certs\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921697    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921722    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921748    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921776    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/ac35a2db8461e24077133cd46589561a-etcd-certs\") pod \"etcd-scheduled-stop-349749\" (UID: \"ac35a2db8461e24077133cd46589561a\") " pod="kube-system/etcd-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921798    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e961e0f2551659c9afd36579a75aecd7-ca-certs\") pod \"kube-apiserver-scheduled-stop-349749\" (UID: \"e961e0f2551659c9afd36579a75aecd7\") " pod="kube-system/kube-apiserver-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921840    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921864    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e961e0f2551659c9afd36579a75aecd7-etc-ca-certificates\") pod \"kube-apiserver-scheduled-stop-349749\" (UID: \"e961e0f2551659c9afd36579a75aecd7\") " pod="kube-system/kube-apiserver-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921889    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e961e0f2551659c9afd36579a75aecd7-k8s-certs\") pod \"kube-apiserver-scheduled-stop-349749\" (UID: \"e961e0f2551659c9afd36579a75aecd7\") " pod="kube-system/kube-apiserver-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921922    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e961e0f2551659c9afd36579a75aecd7-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-349749\" (UID: \"e961e0f2551659c9afd36579a75aecd7\") " pod="kube-system/kube-apiserver-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921948    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.921982    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c1886f55face108f38dbdc76fdc609-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-349749\" (UID: \"e6c1886f55face108f38dbdc76fdc609\") " pod="kube-system/kube-controller-manager-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.922009    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/040d7f6f0e0ec8dade9c7b3ecca450f5-kubeconfig\") pod \"kube-scheduler-scheduled-stop-349749\" (UID: \"040d7f6f0e0ec8dade9c7b3ecca450f5\") " pod="kube-system/kube-scheduler-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.922030    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/ac35a2db8461e24077133cd46589561a-etcd-data\") pod \"etcd-scheduled-stop-349749\" (UID: \"ac35a2db8461e24077133cd46589561a\") " pod="kube-system/etcd-scheduled-stop-349749"
	Feb 14 03:27:24 scheduled-stop-349749 kubelet[2339]: I0214 03:27:24.922054    2339 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e961e0f2551659c9afd36579a75aecd7-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-349749\" (UID: \"e961e0f2551659c9afd36579a75aecd7\") " pod="kube-system/kube-apiserver-scheduled-stop-349749"
	Feb 14 03:27:25 scheduled-stop-349749 kubelet[2339]: I0214 03:27:25.459364    2339 apiserver.go:52] "Watching apiserver"
	Feb 14 03:27:25 scheduled-stop-349749 kubelet[2339]: I0214 03:27:25.500391    2339 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Feb 14 03:27:25 scheduled-stop-349749 kubelet[2339]: I0214 03:27:25.764027    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-349749" podStartSLOduration=1.763957996 podCreationTimestamp="2024-02-14 03:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 03:27:25.719849598 +0000 UTC m=+1.372976492" watchObservedRunningTime="2024-02-14 03:27:25.763957996 +0000 UTC m=+1.417084882"
	Feb 14 03:27:25 scheduled-stop-349749 kubelet[2339]: I0214 03:27:25.776974    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-349749" podStartSLOduration=2.7769284709999997 podCreationTimestamp="2024-02-14 03:27:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 03:27:25.764571256 +0000 UTC m=+1.417698143" watchObservedRunningTime="2024-02-14 03:27:25.776928471 +0000 UTC m=+1.430055357"
	Feb 14 03:27:25 scheduled-stop-349749 kubelet[2339]: I0214 03:27:25.818991    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-349749" podStartSLOduration=1.8189485680000002 podCreationTimestamp="2024-02-14 03:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 03:27:25.777506499 +0000 UTC m=+1.430633377" watchObservedRunningTime="2024-02-14 03:27:25.818948568 +0000 UTC m=+1.472075454"
	Feb 14 03:27:25 scheduled-stop-349749 kubelet[2339]: I0214 03:27:25.882009    2339 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-349749" podStartSLOduration=1.881954161 podCreationTimestamp="2024-02-14 03:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-14 03:27:25.819965601 +0000 UTC m=+1.473092487" watchObservedRunningTime="2024-02-14 03:27:25.881954161 +0000 UTC m=+1.535081047"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-349749 -n scheduled-stop-349749
helpers_test.go:261: (dbg) Run:  kubectl --context scheduled-stop-349749 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context scheduled-stop-349749 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context scheduled-stop-349749 describe pod storage-provisioner: exit status 1 (83.540925ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context scheduled-stop-349749 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-349749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-349749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-349749: (2.089440902s)
--- FAIL: TestScheduledStopUnix (34.58s)

                                                
                                    

Test pass (305/335)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 18.69
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
9 TestDownloadOnly/v1.16.0/DeleteAll 0.22
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 15.76
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.21
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 15.89
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.44
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.28
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.57
31 TestOffline 98.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 143.04
38 TestAddons/parallel/Registry 15.08
40 TestAddons/parallel/InspektorGadget 11.73
41 TestAddons/parallel/MetricsServer 5.99
44 TestAddons/parallel/CSI 59.65
45 TestAddons/parallel/Headlamp 12.51
46 TestAddons/parallel/CloudSpanner 5.69
47 TestAddons/parallel/LocalPath 8.71
48 TestAddons/parallel/NvidiaDevicePlugin 5.51
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.26
53 TestAddons/StoppedEnableDisable 11.26
54 TestCertOptions 39
55 TestCertExpiration 252.54
56 TestDockerFlags 45.31
57 TestForceSystemdFlag 47.34
58 TestForceSystemdEnv 41.85
64 TestErrorSpam/setup 37.2
65 TestErrorSpam/start 0.77
66 TestErrorSpam/status 1.02
67 TestErrorSpam/pause 1.33
68 TestErrorSpam/unpause 1.43
69 TestErrorSpam/stop 11.01
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 82.27
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 34.98
76 TestFunctional/serial/KubeContext 0.09
77 TestFunctional/serial/KubectlGetPods 0.13
80 TestFunctional/serial/CacheCmd/cache/add_remote 2.73
81 TestFunctional/serial/CacheCmd/cache/add_local 1.06
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.08
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
86 TestFunctional/serial/CacheCmd/cache/delete 0.14
87 TestFunctional/serial/MinikubeKubectlCmd 0.14
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
89 TestFunctional/serial/ExtraConfig 41.99
90 TestFunctional/serial/ComponentHealth 0.11
91 TestFunctional/serial/LogsCmd 1.24
92 TestFunctional/serial/LogsFileCmd 1.23
93 TestFunctional/serial/InvalidService 4.51
95 TestFunctional/parallel/ConfigCmd 0.65
96 TestFunctional/parallel/DashboardCmd 13.02
97 TestFunctional/parallel/DryRun 0.49
98 TestFunctional/parallel/InternationalLanguage 0.24
99 TestFunctional/parallel/StatusCmd 1.19
103 TestFunctional/parallel/ServiceCmdConnect 11.72
104 TestFunctional/parallel/AddonsCmd 0.27
105 TestFunctional/parallel/PersistentVolumeClaim 28.11
107 TestFunctional/parallel/SSHCmd 0.72
108 TestFunctional/parallel/CpCmd 2.59
110 TestFunctional/parallel/FileSync 0.41
111 TestFunctional/parallel/CertSync 2.23
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
119 TestFunctional/parallel/License 0.34
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.5
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
130 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
131 TestFunctional/parallel/ServiceCmd/DeployApp 8.22
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
133 TestFunctional/parallel/ProfileCmd/profile_list 0.38
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
135 TestFunctional/parallel/ServiceCmd/List 0.81
136 TestFunctional/parallel/MountCmd/any-port 6.78
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
139 TestFunctional/parallel/ServiceCmd/Format 0.49
140 TestFunctional/parallel/ServiceCmd/URL 0.39
141 TestFunctional/parallel/MountCmd/specific-port 2.17
142 TestFunctional/parallel/MountCmd/VerifyCleanup 2.1
143 TestFunctional/parallel/Version/short 0.08
144 TestFunctional/parallel/Version/components 1.08
145 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
146 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
147 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
148 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
149 TestFunctional/parallel/ImageCommands/ImageBuild 2.59
150 TestFunctional/parallel/ImageCommands/Setup 2.64
151 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.96
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
155 TestFunctional/parallel/DockerEnv/bash 1.46
156 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.51
157 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.78
158 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.9
159 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
160 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.39
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
162 TestFunctional/delete_addon-resizer_images 0.08
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestImageBuild/serial/Setup 34.59
169 TestImageBuild/serial/NormalBuild 1.68
170 TestImageBuild/serial/BuildWithBuildArg 0.87
171 TestImageBuild/serial/BuildWithDockerIgnore 0.69
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
175 TestIngressAddonLegacy/StartLegacyK8sCluster 84.51
177 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.93
178 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
182 TestJSONOutput/start/Command 85.3
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.61
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.52
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.75
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.25
207 TestKicCustomNetwork/create_custom_network 36.67
208 TestKicCustomNetwork/use_default_bridge_network 33.26
209 TestKicExistingNetwork 31.52
210 TestKicCustomSubnet 35.43
211 TestKicStaticIP 35.1
212 TestMainNoArgs 0.06
213 TestMinikubeProfile 68.42
216 TestMountStart/serial/StartWithMountFirst 7.41
217 TestMountStart/serial/VerifyMountFirst 0.26
218 TestMountStart/serial/StartWithMountSecond 8.5
219 TestMountStart/serial/VerifyMountSecond 0.26
220 TestMountStart/serial/DeleteFirst 1.45
221 TestMountStart/serial/VerifyMountPostDelete 0.28
222 TestMountStart/serial/Stop 1.21
223 TestMountStart/serial/RestartStopped 11.06
224 TestMountStart/serial/VerifyMountPostStop 0.27
227 TestMultiNode/serial/FreshStart2Nodes 79.02
228 TestMultiNode/serial/DeployApp2Nodes 42.84
229 TestMultiNode/serial/PingHostFrom2Pods 1.07
230 TestMultiNode/serial/AddNode 20.5
231 TestMultiNode/serial/MultiNodeLabels 0.09
232 TestMultiNode/serial/ProfileList 0.35
233 TestMultiNode/serial/CopyFile 10.48
234 TestMultiNode/serial/StopNode 2.26
235 TestMultiNode/serial/StartAfterStop 13.31
236 TestMultiNode/serial/RestartKeepsNodes 124.53
237 TestMultiNode/serial/DeleteNode 5.12
238 TestMultiNode/serial/StopMultiNode 21.64
239 TestMultiNode/serial/RestartMultiNode 84.27
240 TestMultiNode/serial/ValidateNameConflict 36.06
245 TestPreload 143.5
248 TestSkaffold 122.86
250 TestInsufficientStorage 11.06
251 TestRunningBinaryUpgrade 126
253 TestKubernetesUpgrade 415.24
254 TestMissingContainerUpgrade 119.89
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.17
257 TestNoKubernetes/serial/StartWithK8s 43.58
258 TestNoKubernetes/serial/StartWithStopK8s 16.64
259 TestNoKubernetes/serial/Start 9.96
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
261 TestNoKubernetes/serial/ProfileList 0.94
262 TestNoKubernetes/serial/Stop 1.23
263 TestNoKubernetes/serial/StartNoArgs 7.41
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
276 TestStoppedBinaryUpgrade/Setup 1.27
277 TestStoppedBinaryUpgrade/Upgrade 83.17
278 TestStoppedBinaryUpgrade/MinikubeLogs 1.55
280 TestPause/serial/Start 50.65
281 TestPause/serial/SecondStartNoReconfiguration 39.54
282 TestPause/serial/Pause 0.62
283 TestPause/serial/VerifyStatus 0.32
284 TestPause/serial/Unpause 0.56
285 TestPause/serial/PauseAgain 0.85
286 TestPause/serial/DeletePaused 2.24
287 TestPause/serial/VerifyDeletedResources 0.37
295 TestNetworkPlugins/group/auto/Start 89.34
296 TestNetworkPlugins/group/auto/KubeletFlags 0.39
297 TestNetworkPlugins/group/auto/NetCatPod 11.38
298 TestNetworkPlugins/group/auto/DNS 0.25
299 TestNetworkPlugins/group/auto/Localhost 0.22
300 TestNetworkPlugins/group/auto/HairPin 0.23
301 TestNetworkPlugins/group/kindnet/Start 71.81
302 TestNetworkPlugins/group/calico/Start 89.89
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
305 TestNetworkPlugins/group/kindnet/NetCatPod 11.37
306 TestNetworkPlugins/group/kindnet/DNS 0.26
307 TestNetworkPlugins/group/kindnet/Localhost 0.25
308 TestNetworkPlugins/group/kindnet/HairPin 0.24
309 TestNetworkPlugins/group/custom-flannel/Start 65.4
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.42
312 TestNetworkPlugins/group/calico/NetCatPod 13.39
313 TestNetworkPlugins/group/calico/DNS 0.3
314 TestNetworkPlugins/group/calico/Localhost 0.27
315 TestNetworkPlugins/group/calico/HairPin 0.27
316 TestNetworkPlugins/group/false/Start 94.57
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.42
319 TestNetworkPlugins/group/custom-flannel/DNS 0.24
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
322 TestNetworkPlugins/group/enable-default-cni/Start 84.32
323 TestNetworkPlugins/group/false/KubeletFlags 0.32
324 TestNetworkPlugins/group/false/NetCatPod 10.3
325 TestNetworkPlugins/group/false/DNS 0.19
326 TestNetworkPlugins/group/false/Localhost 0.18
327 TestNetworkPlugins/group/false/HairPin 0.19
328 TestNetworkPlugins/group/flannel/Start 71.85
329 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
330 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.4
331 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
332 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
333 TestNetworkPlugins/group/enable-default-cni/HairPin 0.26
334 TestNetworkPlugins/group/bridge/Start 90.3
335 TestNetworkPlugins/group/flannel/ControllerPod 6.01
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.32
337 TestNetworkPlugins/group/flannel/NetCatPod 12.26
338 TestNetworkPlugins/group/flannel/DNS 0.22
339 TestNetworkPlugins/group/flannel/Localhost 0.17
340 TestNetworkPlugins/group/flannel/HairPin 0.2
341 TestNetworkPlugins/group/kubenet/Start 52.43
342 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
343 TestNetworkPlugins/group/bridge/NetCatPod 12.35
344 TestNetworkPlugins/group/bridge/DNS 0.33
345 TestNetworkPlugins/group/bridge/Localhost 0.26
346 TestNetworkPlugins/group/bridge/HairPin 0.29
347 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
348 TestNetworkPlugins/group/kubenet/NetCatPod 14.3
350 TestStartStop/group/old-k8s-version/serial/FirstStart 138.04
351 TestNetworkPlugins/group/kubenet/DNS 0.22
352 TestNetworkPlugins/group/kubenet/Localhost 0.18
353 TestNetworkPlugins/group/kubenet/HairPin 0.26
355 TestStartStop/group/no-preload/serial/FirstStart 56.66
356 TestStartStop/group/no-preload/serial/DeployApp 8.36
357 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.1
358 TestStartStop/group/no-preload/serial/Stop 10.94
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/no-preload/serial/SecondStart 317.39
361 TestStartStop/group/old-k8s-version/serial/DeployApp 10.62
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.04
363 TestStartStop/group/old-k8s-version/serial/Stop 10.94
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
365 TestStartStop/group/old-k8s-version/serial/SecondStart 438.19
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
367 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
369 TestStartStop/group/no-preload/serial/Pause 3.27
371 TestStartStop/group/embed-certs/serial/FirstStart 86.2
372 TestStartStop/group/embed-certs/serial/DeployApp 10.4
373 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
374 TestStartStop/group/embed-certs/serial/Stop 11
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
376 TestStartStop/group/embed-certs/serial/SecondStart 320.63
377 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
379 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
380 TestStartStop/group/old-k8s-version/serial/Pause 3
382 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.5
383 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.36
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.18
385 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
386 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
387 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 341.04
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.01
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
391 TestStartStop/group/embed-certs/serial/Pause 3
393 TestStartStop/group/newest-cni/serial/FirstStart 47.35
394 TestStartStop/group/newest-cni/serial/DeployApp 0
395 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
396 TestStartStop/group/newest-cni/serial/Stop 9.02
397 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
398 TestStartStop/group/newest-cni/serial/SecondStart 31.97
399 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
401 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
402 TestStartStop/group/newest-cni/serial/Pause 3
403 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 15
404 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
405 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
406 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
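The pass table above is whitespace-delimited: order, test name, duration in seconds. As a convenience for triage (not part of the report itself), a minimal shell sketch for surfacing the slowest passing tests, assuming the rows have been saved to a file named pass_table.txt:

    # Hypothetical helper: sort rows numerically on the duration column
    # (field 3) and show the five slowest passing tests.
    sort -k3,3 -rn pass_table.txt | head -5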
TestDownloadOnly/v1.16.0/json-events (18.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-704261 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-704261 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (18.694091297s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (18.69s)
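For reference, the download-only flow exercised above can be reproduced outside the harness with the same flags (a sketch: the profile name is arbitrary, and `minikube` stands in for the `out/minikube-linux-arm64` build under test; the harness also passes `--container-runtime=docker` twice, which is harmless):

    # Download the v1.16.0 images and binaries without creating a cluster.
    minikube start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr \
      --kubernetes-version=v1.16.0 \
      --container-runtime=docker --driver=docker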

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
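preload-exists passes because the json-events run above already fetched the preload tarball (its log, replayed under LogsDuration below, shows the download from the minikube-preloaded-volume-tarballs bucket with an md5 checksum). A manual spot check, assuming the MINIKUBE_HOME used by this job:

    # The subtest effectively verifies this file is present in the cache.
    ls "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4"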

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-704261
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-704261: exit status 85 (95.165252ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-704261 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC |          |
	|         | -p download-only-704261        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:57:32
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:57:32.296463 1271386 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:57:32.296585 1271386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:57:32.296590 1271386 out.go:304] Setting ErrFile to fd 2...
	I0214 02:57:32.296596 1271386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:57:32.296837 1271386 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	W0214 02:57:32.296974 1271386 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18165-1266022/.minikube/config/config.json: open /home/jenkins/minikube-integration/18165-1266022/.minikube/config/config.json: no such file or directory
	I0214 02:57:32.297421 1271386 out.go:298] Setting JSON to true
	I0214 02:57:32.298238 1271386 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20398,"bootTime":1707859055,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 02:57:32.298306 1271386 start.go:138] virtualization:  
	I0214 02:57:32.301436 1271386 out.go:97] [download-only-704261] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:57:32.303693 1271386 out.go:169] MINIKUBE_LOCATION=18165
	W0214 02:57:32.301661 1271386 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball: no such file or directory
	I0214 02:57:32.301702 1271386 notify.go:220] Checking for updates...
	I0214 02:57:32.305605 1271386 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:57:32.307449 1271386 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 02:57:32.309386 1271386 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 02:57:32.311424 1271386 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 02:57:32.315856 1271386 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 02:57:32.316108 1271386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:57:32.336856 1271386 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:57:32.336961 1271386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:57:32.405863 1271386 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 02:57:32.396906293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:57:32.405963 1271386 docker.go:295] overlay module found
	I0214 02:57:32.407897 1271386 out.go:97] Using the docker driver based on user configuration
	I0214 02:57:32.407923 1271386 start.go:298] selected driver: docker
	I0214 02:57:32.407929 1271386 start.go:902] validating driver "docker" against <nil>
	I0214 02:57:32.408036 1271386 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:57:32.464401 1271386 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:56 SystemTime:2024-02-14 02:57:32.455780681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:57:32.464561 1271386 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:57:32.464862 1271386 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 02:57:32.465014 1271386 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 02:57:32.467308 1271386 out.go:169] Using Docker driver with root privileges
	I0214 02:57:32.469393 1271386 cni.go:84] Creating CNI manager for ""
	I0214 02:57:32.469424 1271386 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0214 02:57:32.469443 1271386 start_flags.go:321] config:
	{Name:download-only-704261 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-704261 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:57:32.471618 1271386 out.go:97] Starting control plane node download-only-704261 in cluster download-only-704261
	I0214 02:57:32.471641 1271386 cache.go:121] Beginning downloading kic base image for docker with docker
	I0214 02:57:32.473852 1271386 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:57:32.473903 1271386 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0214 02:57:32.474058 1271386 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:57:32.488127 1271386 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:57:32.488831 1271386 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:57:32.488957 1271386 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:57:32.561643 1271386 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0214 02:57:32.561667 1271386 cache.go:56] Caching tarball of preloaded images
	I0214 02:57:32.562332 1271386 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0214 02:57:32.564519 1271386 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0214 02:57:32.564541 1271386 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:57:32.684919 1271386 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0214 02:57:40.821815 1271386 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-704261"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
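Exit status 85 is the expected outcome here: `--download-only` never creates a node, so `minikube logs` has nothing to read (hence the closing 'The control plane node "" does not exist.' hint), and the subtest passes precisely because the command fails. A sketch of the same check, run immediately after a download-only start and before the profile is deleted:

    # logs is expected to fail against a download-only profile.
    out/minikube-linux-arm64 logs -p download-only-704261; echo "exit status: $?"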

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-704261
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (15.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-131058 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-131058 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=docker  --container-runtime=docker: (15.756648464s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (15.76s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-131058
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-131058: exit status 85 (83.739889ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-704261 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC |                     |
	|         | -p download-only-704261        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	| delete  | -p download-only-704261        | download-only-704261 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	| start   | -o=json --download-only        | download-only-131058 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC |                     |
	|         | -p download-only-131058        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:57:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:57:51.440771 1271552 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:57:51.440899 1271552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:57:51.440909 1271552 out.go:304] Setting ErrFile to fd 2...
	I0214 02:57:51.440915 1271552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:57:51.441145 1271552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 02:57:51.441598 1271552 out.go:298] Setting JSON to true
	I0214 02:57:51.442434 1271552 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20417,"bootTime":1707859055,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 02:57:51.442499 1271552 start.go:138] virtualization:  
	I0214 02:57:51.445066 1271552 out.go:97] [download-only-131058] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:57:51.447095 1271552 out.go:169] MINIKUBE_LOCATION=18165
	I0214 02:57:51.445266 1271552 notify.go:220] Checking for updates...
	I0214 02:57:51.451378 1271552 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:57:51.453455 1271552 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 02:57:51.455282 1271552 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 02:57:51.456877 1271552 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 02:57:51.460186 1271552 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 02:57:51.460506 1271552 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:57:51.480864 1271552 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:57:51.480957 1271552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:57:51.549541 1271552 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:57:51.539811041 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:57:51.549639 1271552 docker.go:295] overlay module found
	I0214 02:57:51.551835 1271552 out.go:97] Using the docker driver based on user configuration
	I0214 02:57:51.551858 1271552 start.go:298] selected driver: docker
	I0214 02:57:51.551865 1271552 start.go:902] validating driver "docker" against <nil>
	I0214 02:57:51.551969 1271552 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:57:51.612633 1271552 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:57:51.602890699 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:57:51.612796 1271552 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:57:51.613052 1271552 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 02:57:51.613197 1271552 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 02:57:51.615151 1271552 out.go:169] Using Docker driver with root privileges
	I0214 02:57:51.617031 1271552 cni.go:84] Creating CNI manager for ""
	I0214 02:57:51.617061 1271552 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 02:57:51.617074 1271552 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 02:57:51.617086 1271552 start_flags.go:321] config:
	{Name:download-only-131058 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-131058 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:57:51.619069 1271552 out.go:97] Starting control plane node download-only-131058 in cluster download-only-131058
	I0214 02:57:51.619096 1271552 cache.go:121] Beginning downloading kic base image for docker with docker
	I0214 02:57:51.620870 1271552 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:57:51.620903 1271552 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 02:57:51.621070 1271552 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:57:51.635043 1271552 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:57:51.635187 1271552 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:57:51.635211 1271552 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:57:51.635219 1271552 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:57:51.635228 1271552 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:57:51.684704 1271552 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0214 02:57:51.684733 1271552 cache.go:56] Caching tarball of preloaded images
	I0214 02:57:51.684912 1271552 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 02:57:51.686986 1271552 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0214 02:57:51.687019 1271552 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:57:51.800991 1271552 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0214 02:58:05.423811 1271552 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:58:05.423928 1271552 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:58:06.266511 1271552 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I0214 02:58:06.266886 1271552 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/download-only-131058/config.json ...
	I0214 02:58:06.266918 1271552 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/download-only-131058/config.json: {Name:mk2840a175401dfc587d89710197a4712151afe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:06.267564 1271552 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0214 02:58:06.267768 1271552 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/linux/arm64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-131058"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
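The tail of this log shows kubectl being downloaded with a `checksum=file:` reference to the published sha256. The standalone equivalent of that integrity check (the standard kubectl verification procedure, independent of minikube) would be:

    # Fetch the arm64 kubectl binary and its published checksum, then verify.
    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl
    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/arm64/kubectl.sha256
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check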

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-131058
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (15.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-110193 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-110193 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (15.886446123s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (15.89s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-110193
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-110193: exit status 85 (440.480014ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-704261 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC |                     |
	|         | -p download-only-704261           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	| delete  | -p download-only-704261           | download-only-704261 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC | 14 Feb 24 02:57 UTC |
	| start   | -o=json --download-only           | download-only-131058 | jenkins | v1.32.0 | 14 Feb 24 02:57 UTC |                     |
	|         | -p download-only-131058           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| delete  | -p download-only-131058           | download-only-131058 | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC | 14 Feb 24 02:58 UTC |
	| start   | -o=json --download-only           | download-only-110193 | jenkins | v1.32.0 | 14 Feb 24 02:58 UTC |                     |
	|         | -p download-only-110193           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=docker                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/02/14 02:58:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0214 02:58:07.647907 1271713 out.go:291] Setting OutFile to fd 1 ...
	I0214 02:58:07.648033 1271713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:58:07.648043 1271713 out.go:304] Setting ErrFile to fd 2...
	I0214 02:58:07.648049 1271713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 02:58:07.648305 1271713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 02:58:07.648737 1271713 out.go:298] Setting JSON to true
	I0214 02:58:07.649575 1271713 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20433,"bootTime":1707859055,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 02:58:07.649645 1271713 start.go:138] virtualization:  
	I0214 02:58:07.652523 1271713 out.go:97] [download-only-110193] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 02:58:07.652777 1271713 notify.go:220] Checking for updates...
	I0214 02:58:07.654984 1271713 out.go:169] MINIKUBE_LOCATION=18165
	I0214 02:58:07.657400 1271713 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 02:58:07.659545 1271713 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 02:58:07.661525 1271713 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 02:58:07.663672 1271713 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0214 02:58:07.667429 1271713 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0214 02:58:07.667748 1271713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 02:58:07.687906 1271713 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 02:58:07.688018 1271713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:58:07.756984 1271713 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:58:07.747768466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:58:07.757157 1271713 docker.go:295] overlay module found
	I0214 02:58:07.759137 1271713 out.go:97] Using the docker driver based on user configuration
	I0214 02:58:07.759166 1271713 start.go:298] selected driver: docker
	I0214 02:58:07.759173 1271713 start.go:902] validating driver "docker" against <nil>
	I0214 02:58:07.759279 1271713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 02:58:07.812272 1271713 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:49 SystemTime:2024-02-14 02:58:07.803633285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 02:58:07.812428 1271713 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0214 02:58:07.812711 1271713 start_flags.go:392] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0214 02:58:07.812855 1271713 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0214 02:58:07.815076 1271713 out.go:169] Using Docker driver with root privileges
	I0214 02:58:07.817059 1271713 cni.go:84] Creating CNI manager for ""
	I0214 02:58:07.817087 1271713 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0214 02:58:07.817101 1271713 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0214 02:58:07.817115 1271713 start_flags.go:321] config:
	{Name:download-only-110193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-110193 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 02:58:07.819202 1271713 out.go:97] Starting control plane node download-only-110193 in cluster download-only-110193
	I0214 02:58:07.819224 1271713 cache.go:121] Beginning downloading kic base image for docker with docker
	I0214 02:58:07.821112 1271713 out.go:97] Pulling base image v0.0.42-1704759386-17866 ...
	I0214 02:58:07.821148 1271713 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0214 02:58:07.821340 1271713 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local docker daemon
	I0214 02:58:07.835546 1271713 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 to local cache
	I0214 02:58:07.835724 1271713 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory
	I0214 02:58:07.835750 1271713 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 in local cache directory, skipping pull
	I0214 02:58:07.835764 1271713 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 exists in cache, skipping pull
	I0214 02:58:07.835773 1271713 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 as a tarball
	I0214 02:58:07.878673 1271713 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0214 02:58:07.878705 1271713 cache.go:56] Caching tarball of preloaded images
	I0214 02:58:07.878886 1271713 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0214 02:58:07.880851 1271713 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0214 02:58:07.880884 1271713 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:58:07.990405 1271713 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0214 02:58:20.301819 1271713 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:58:20.301937 1271713 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0214 02:58:21.101832 1271713 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on docker
	I0214 02:58:21.102236 1271713 profile.go:148] Saving config to /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/download-only-110193/config.json ...
	I0214 02:58:21.102274 1271713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/download-only-110193/config.json: {Name:mkeced660dec8b7a72e4815c2b5b28473a03fc80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0214 02:58:21.102472 1271713 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0214 02:58:21.102676 1271713 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18165-1266022/.minikube/cache/linux/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-110193"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.44s)
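
The preload URL in the log above carries a "?checksum=md5:..." query string, the hashicorp/go-getter convention minikube's download step relies on: the fetched file is verified against that digest before it is accepted. A minimal sketch of the same checksum-verified fetch, assuming the go-getter v1 API (the destination path is illustrative, not what the test uses):

	package main

	import (
		"log"

		getter "github.com/hashicorp/go-getter"
	)

	func main() {
		// go-getter parses the ?checksum=md5:... query, downloads the file,
		// and rejects it if the computed digest does not match.
		src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867"
		dst := "/tmp/preload.tar.lz4" // illustrative destination
		if err := getter.GetFile(dst, src); err != nil {
			log.Fatalf("download or checksum verification failed: %v", err)
		}
		log.Printf("verified preload written to %s", dst)
	}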

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.28s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.28s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-110193
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-981860 --alsologtostderr --binary-mirror http://127.0.0.1:44401 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-981860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-981860
--- PASS: TestBinaryMirror (0.57s)
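
TestBinaryMirror passes --binary-mirror http://127.0.0.1:44401, pointing the kubectl/kubelet/kubeadm downloads at a local HTTP endpoint instead of the default dl.k8s.io mirror. A minimal sketch of such a mirror, under the assumption that it only has to serve the same relative release/... paths the default mirror uses (port and directory here are illustrative):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory laid out like the upstream mirror, e.g.
		// ./mirror/release/v1.29.0-rc.2/bin/linux/arm64/kubectl, so the
		// same relative paths resolve against 127.0.0.1.
		log.Fatal(http.ListenAndServe("127.0.0.1:44401", http.FileServer(http.Dir("./mirror"))))
	}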

TestOffline (98.54s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-014016 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-014016 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m36.142794261s)
helpers_test.go:175: Cleaning up "offline-docker-014016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-014016
E0214 03:31:22.974792 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-014016: (2.401577696s)
--- PASS: TestOffline (98.54s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-565438
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-565438: exit status 85 (78.180508ms)

-- stdout --
	* Profile "addons-565438" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565438"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-565438
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-565438: exit status 85 (86.317589ms)

-- stdout --
	* Profile "addons-565438" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-565438"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (143.04s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-565438 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-565438 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (2m23.035188501s)
--- PASS: TestAddons/Setup (143.04s)

TestAddons/parallel/Registry (15.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 46.080614ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-cdg2k" [5d4d93ef-1e75-4b2a-994e-2292fc6b92a2] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004845654s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r45hf" [bc7041d8-bd4a-4d32-9849-568b3e754650] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005314281s
addons_test.go:340: (dbg) Run:  kubectl --context addons-565438 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-565438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-565438 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.906139216s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 ip
2024/02/14 03:01:03 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.08s)
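
The "[DEBUG] GET http://192.168.49.2:5000" line is a retrying probe of the registry through the node IP, in the log format used by hashicorp/go-retryablehttp. A minimal sketch of that probe, assuming retryablehttp is an acceptable stand-in; the IP is the one this run obtained from "minikube ip":

	package main

	import (
		"log"

		retryablehttp "github.com/hashicorp/go-retryablehttp"
	)

	func main() {
		// Retries transient failures and logs "[DEBUG] GET <url>" per attempt,
		// matching the debug line captured in the test output above.
		resp, err := retryablehttp.Get("http://192.168.49.2:5000")
		if err != nil {
			log.Fatalf("registry not reachable via node IP: %v", err)
		}
		defer resp.Body.Close()
		log.Printf("registry answered with HTTP %d", resp.StatusCode)
	}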

TestAddons/parallel/InspektorGadget (11.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8s7cl" [7992e48e-cfac-47bb-ad4a-3681ad07c7fd] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00430494s
addons_test.go:841: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-565438
addons_test.go:841: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-565438: (5.723783006s)
--- PASS: TestAddons/parallel/InspektorGadget (11.73s)

TestAddons/parallel/MetricsServer (5.99s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 7.464235ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-t6ph7" [26ff7f7f-fc20-4e02-a4ec-20b2ee3cd0ad] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011346577s
addons_test.go:415: (dbg) Run:  kubectl --context addons-565438 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.99s)

TestAddons/parallel/CSI (59.65s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 10.714232ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-565438 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-565438 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [47242931-1776-476e-8572-b1c296c6c48e] Pending
helpers_test.go:344: "task-pv-pod" [47242931-1776-476e-8572-b1c296c6c48e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [47242931-1776-476e-8572-b1c296c6c48e] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004448749s
addons_test.go:584: (dbg) Run:  kubectl --context addons-565438 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565438 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-565438 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-565438 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-565438 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-565438 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-565438 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3cb8ab97-7e39-4018-9193-fdf72308e286] Pending
helpers_test.go:344: "task-pv-pod-restore" [3cb8ab97-7e39-4018-9193-fdf72308e286] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3cb8ab97-7e39-4018-9193-fdf72308e286] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003752287s
addons_test.go:626: (dbg) Run:  kubectl --context addons-565438 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-565438 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-565438 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-arm64 -p addons-565438 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.688496345s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (59.65s)
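
The runs of identical helpers_test.go:394 lines above are poll loops: the helper re-reads the claim's .status.phase until it reports Bound or the 6m0s budget runs out. A minimal sketch of that polling pattern (a hypothetical helper, not minikube's actual code; the 2s interval is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound re-runs the same kubectl query the test log shows until
	// the claim reports Bound or the timeout expires.
	func waitPVCBound(kubectx, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
				"-o", "jsonpath={.status.phase}", "-n", "default").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // poll interval is illustrative
		}
		return fmt.Errorf("pvc %q not Bound within %s", name, timeout)
	}

	func main() {
		if err := waitPVCBound("addons-565438", "hpvc", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}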

TestAddons/parallel/Headlamp (12.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-565438 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-565438 --alsologtostderr -v=1: (1.504584457s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-68xsh" [ec452498-c5d6-494a-ae8f-a90eab5ef2f1] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-68xsh" [ec452498-c5d6-494a-ae8f-a90eab5ef2f1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-68xsh" [ec452498-c5d6-494a-ae8f-a90eab5ef2f1] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-68xsh" [ec452498-c5d6-494a-ae8f-a90eab5ef2f1] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00434227s
--- PASS: TestAddons/parallel/Headlamp (12.51s)

TestAddons/parallel/CloudSpanner (5.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7b4754d5d4-qbjdx" [2a817a28-877a-4037-ac34-e8dd057fc7be] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004698893s
addons_test.go:860: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-565438
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

TestAddons/parallel/LocalPath (8.71s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-565438 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-565438 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-565438 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [47002eac-142c-4cce-b16e-67f6e62c2e37] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [47002eac-142c-4cce-b16e-67f6e62c2e37] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [47002eac-142c-4cce-b16e-67f6e62c2e37] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004459287s
addons_test.go:891: (dbg) Run:  kubectl --context addons-565438 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 ssh "cat /opt/local-path-provisioner/pvc-a0ae79f4-863f-4a31-aca1-22c767dfc58a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-565438 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-565438 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-arm64 -p addons-565438 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.71s)

TestAddons/parallel/NvidiaDevicePlugin (5.51s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-784lh" [13521b17-0908-4b48-bc8e-8cc5352d8e8f] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006310335s
addons_test.go:955: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-565438
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.51s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-q24wb" [d933adc3-db08-4b49-9987-b64aea9491e2] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004280052s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.26s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-565438 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-565438 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.26s)

TestAddons/StoppedEnableDisable (11.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-565438
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-565438: (10.971453647s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-565438
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-565438
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-565438
--- PASS: TestAddons/StoppedEnableDisable (11.26s)

TestCertOptions (39s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-690321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-690321 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (36.28912388s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-690321 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-690321 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-690321 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-690321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-690321
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-690321: (2.035163664s)
--- PASS: TestCertOptions (39.00s)

TestCertExpiration (252.54s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-524079 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-524079 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (43.27650518s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-524079 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-524079 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (27.234901049s)
helpers_test.go:175: Cleaning up "cert-expiration-524079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-524079
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-524079: (2.0303098s)
--- PASS: TestCertExpiration (252.54s)
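
The two --cert-expiration values bracket the interesting cases: 3m is short enough to lapse while the test waits, and 8760h is one year (365 * 24h), the value the second start uses when it regenerates certificates. Both strings are plain Go durations, as a quick check shows:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		short, _ := time.ParseDuration("3m")
		long, _ := time.ParseDuration("8760h")
		fmt.Println(short, long, long.Hours()/24) // 3m0s 8760h0m0s 365
	}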

TestDockerFlags (45.31s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-126138 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-126138 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.332006271s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-126138 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-126138 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-126138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-126138
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-126138: (2.203340645s)
--- PASS: TestDockerFlags (45.31s)
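
The two systemctl queries are the assertions of this test: each --docker-env value must appear in the docker unit's Environment property and each --docker-opt value in its ExecStart line. A minimal sketch of that check on captured output (the Environment string below is illustrative, not the test's real capture):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Output of: systemctl show docker --property=Environment --no-pager
		out := "Environment=FOO=BAR BAZ=BAT"
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(out, want) {
				fmt.Printf("missing docker env %q\n", want)
			}
		}
		fmt.Println("all --docker-env values present")
	}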

TestForceSystemdFlag (47.34s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-631727 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0214 03:31:36.134003 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-631727 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (44.650682122s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-631727 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-631727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-631727
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-631727: (2.227454943s)
--- PASS: TestForceSystemdFlag (47.34s)
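
This test and TestForceSystemdEnv below share the same final assertion: inside the node, "docker info --format {{.CgroupDriver}}" must print systemd rather than the cgroupfs default. A minimal sketch of that check, run here against the host daemon purely for illustration (the test runs it over minikube ssh):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		driver := strings.TrimSpace(string(out))
		fmt.Println("cgroup driver:", driver, "force-systemd ok:", driver == "systemd")
	}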

TestForceSystemdEnv (41.85s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-176582 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-176582 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.094331702s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-176582 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-176582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-176582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-176582: (2.285707734s)
--- PASS: TestForceSystemdEnv (41.85s)

TestErrorSpam/setup (37.2s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-328436 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-328436 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-328436 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-328436 --driver=docker  --container-runtime=docker: (37.196319443s)
--- PASS: TestErrorSpam/setup (37.20s)

TestErrorSpam/start (0.77s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.02s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 status
--- PASS: TestErrorSpam/status (1.02s)

TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

TestErrorSpam/stop (11.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 stop: (10.803068245s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-328436 --log_dir /tmp/nospam-328436 stop
--- PASS: TestErrorSpam/stop (11.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18165-1266022/.minikube/files/etc/test/nested/copy/1271380/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-094137 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-094137 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m22.270174601s)
--- PASS: TestFunctional/serial/StartWithProxy (82.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.98s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-094137 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-094137 --alsologtostderr -v=8: (34.973204211s)
functional_test.go:659: soft start took 34.975542176s for "functional-094137" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.98s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.13s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-094137 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.73s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-094137 /tmp/TestFunctionalserialCacheCmdcacheadd_local2510103816/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cache add minikube-local-cache-test:functional-094137
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cache delete minikube-local-cache-test:functional-094137
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-094137
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (315.129429ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
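
The round trip above is the whole contract of cache reload: remove the image inside the node, prove crictl inspecti now fails, run cache reload, and prove inspecti succeeds again. A minimal sketch of that sequence, assuming the same binary path and profile name as this run:

package main

import (
	"fmt"
	"os/exec"
)

const (
	bin     = "out/minikube-linux-arm64"
	profile = "functional-094137"
	image   = "registry.k8s.io/pause:latest"
)

// inNode runs a command inside the node over `minikube ssh sudo ...`.
func inNode(args ...string) error {
	return exec.Command(bin, append([]string{"-p", profile, "ssh", "sudo"}, args...)...).Run()
}

func main() {
	// 1. Remove the cached image from inside the node.
	if err := inNode("docker", "rmi", image); err != nil {
		panic(err)
	}
	// 2. inspecti must now fail: the image is gone from the runtime.
	if err := inNode("crictl", "inspecti", image); err == nil {
		panic("expected inspecti to fail after rmi")
	}
	// 3. Reload the node's images from minikube's local cache.
	if err := exec.Command(bin, "-p", profile, "cache", "reload").Run(); err != nil {
		panic(err)
	}
	// 4. inspecti succeeds again once the cache has been pushed back in.
	if err := inNode("crictl", "inspecti", image); err != nil {
		panic(err)
	}
	fmt.Println("cache reload round trip OK")
}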

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 kubectl -- --context functional-094137 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-094137 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (41.99s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-094137 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0214 03:05:49.099089 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.105608 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.115926 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.136238 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.176519 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.256832 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.417197 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:49.737816 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:50.378759 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:51.658987 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:54.219777 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:05:59.340283 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:06:09.580489 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-094137 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.991439961s)
functional_test.go:757: restart took 41.991548282s for "functional-094137" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.99s)
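
The --extra-config value bundles a component, a key, and a value as component.key=value; the run above sets apiserver.enable-admission-plugins=NamespaceAutoProvision, and it resurfaces later in the profile's ExtraOptions. A sketch of how such a flag decomposes, as a hypothetical parser rather than minikube's own:

package main

import (
	"fmt"
	"strings"
)

type extraOption struct {
	Component, Key, Value string
}

// parseExtraConfig splits "component.key=value" on the first '=' and
// then the first '.'; keys themselves may contain dashes.
func parseExtraConfig(s string) (extraOption, error) {
	kv, value, ok := strings.Cut(s, "=")
	if !ok {
		return extraOption{}, fmt.Errorf("missing '=' in %q", s)
	}
	component, key, ok := strings.Cut(kv, ".")
	if !ok {
		return extraOption{}, fmt.Errorf("missing '.' in %q", kv)
	}
	return extraOption{component, key, value}, nil
}

func main() {
	opt, err := parseExtraConfig("apiserver.enable-admission-plugins=NamespaceAutoProvision")
	if err != nil {
		panic(err)
	}
	// Matches the ExtraOptions entry that later shows up in the profile config:
	// {Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}
	fmt.Printf("%+v\n", opt)
}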

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-094137 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.24s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 logs: (1.234993058s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.23s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 logs --file /tmp/TestFunctionalserialLogsFileCmd1115789307/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 logs --file /tmp/TestFunctionalserialLogsFileCmd1115789307/001/logs.txt: (1.225858217s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (4.51s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-094137 apply -f testdata/invalidsvc.yaml
E0214 03:06:30.060999 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-094137
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-094137: exit status 115 (565.448174ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30104 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-094137 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.51s)

TestFunctional/parallel/ConfigCmd (0.65s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 config get cpus: exit status 14 (128.911525ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 config get cpus: exit status 14 (99.295135ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.65s)
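
The two non-zero exits above show the config contract: get on an unset key fails with exit status 14 and the "specified key could not be found in config" message, while set/get/unset otherwise succeed. A sketch that uses that exit status to distinguish "unset" from real failures; the value 14 is taken from this log, not from a documented guarantee:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configGet returns (value, set, error), treating exit status 14 as
// "key not set" rather than as a failure.
func configGet(bin, profile, key string) (string, bool, error) {
	out, err := exec.Command(bin, "-p", profile, "config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		return "", false, nil // key is simply not set
	}
	if err != nil {
		return "", false, err // some other failure
	}
	return string(out), true, nil
}

func main() {
	val, ok, err := configGet("out/minikube-linux-arm64", "functional-094137", "cpus")
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("cpus is unset")
		return
	}
	fmt.Println("cpus =", val)
}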

TestFunctional/parallel/DashboardCmd (13.02s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-094137 --alsologtostderr -v=1]
E0214 03:07:11.021146 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-094137 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1311225: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.02s)

TestFunctional/parallel/DryRun (0.49s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-094137 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-094137 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (214.495695ms)

-- stdout --
	* [functional-094137] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18165
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0214 03:07:10.270769 1310780 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:07:10.270952 1310780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:07:10.270963 1310780 out.go:304] Setting ErrFile to fd 2...
	I0214 03:07:10.270970 1310780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:07:10.271254 1310780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 03:07:10.271734 1310780 out.go:298] Setting JSON to false
	I0214 03:07:10.274488 1310780 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20976,"bootTime":1707859055,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 03:07:10.274592 1310780 start.go:138] virtualization:  
	I0214 03:07:10.279431 1310780 out.go:177] * [functional-094137] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I0214 03:07:10.281797 1310780 out.go:177]   - MINIKUBE_LOCATION=18165
	I0214 03:07:10.281976 1310780 notify.go:220] Checking for updates...
	I0214 03:07:10.284821 1310780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:07:10.287226 1310780 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 03:07:10.289471 1310780 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 03:07:10.291738 1310780 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:07:10.293828 1310780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:07:10.296649 1310780 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 03:07:10.297280 1310780 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:07:10.318557 1310780 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:07:10.318676 1310780 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:07:10.406789 1310780 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-14 03:07:10.396577007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:07:10.406901 1310780 docker.go:295] overlay module found
	I0214 03:07:10.409614 1310780 out.go:177] * Using the docker driver based on existing profile
	I0214 03:07:10.411716 1310780 start.go:298] selected driver: docker
	I0214 03:07:10.411733 1310780 start.go:902] validating driver "docker" against &{Name:functional-094137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-094137 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:07:10.411836 1310780 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:07:10.414430 1310780 out.go:177] 
	W0214 03:07:10.416447 1310780 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0214 03:07:10.418357 1310780 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-094137 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.49s)
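
Both dry-run invocations fail fast in validation: 250MB is below the usable minimum of 1800MB, so minikube exits with RSRC_INSUFFICIENT_REQ_MEMORY before touching the driver. A toy version of that check, with the floor quoted from the error text above rather than read out of minikube's source:

package main

import "fmt"

const minUsableMB = 1800 // quoted from the RSRC_INSUFFICIENT_REQ_MEMORY message

// validateRequestedMemory mimics the shape of the check: reject any
// request below the usable minimum before doing any real work.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // rejected, as in the test
	fmt.Println(validateRequestedMemory(4000)) // accepted: the profile's actual allocation
}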

TestFunctional/parallel/InternationalLanguage (0.24s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-094137 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-094137 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (236.707569ms)

-- stdout --
	* [functional-094137] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18165
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0214 03:07:10.058341 1310742 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:07:10.058622 1310742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:07:10.058652 1310742 out.go:304] Setting ErrFile to fd 2...
	I0214 03:07:10.058671 1310742 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:07:10.059990 1310742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 03:07:10.060448 1310742 out.go:298] Setting JSON to false
	I0214 03:07:10.061564 1310742 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20975,"bootTime":1707859055,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I0214 03:07:10.061648 1310742 start.go:138] virtualization:  
	I0214 03:07:10.064777 1310742 out.go:177] * [functional-094137] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I0214 03:07:10.067037 1310742 out.go:177]   - MINIKUBE_LOCATION=18165
	I0214 03:07:10.069309 1310742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0214 03:07:10.067136 1310742 notify.go:220] Checking for updates...
	I0214 03:07:10.073347 1310742 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	I0214 03:07:10.075388 1310742 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	I0214 03:07:10.077368 1310742 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0214 03:07:10.079238 1310742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0214 03:07:10.081535 1310742 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 03:07:10.082153 1310742 driver.go:392] Setting default libvirt URI to qemu:///system
	I0214 03:07:10.108307 1310742 docker.go:122] docker version: linux-25.0.3:Docker Engine - Community
	I0214 03:07:10.108450 1310742 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:07:10.192324 1310742 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:57 SystemTime:2024-02-14 03:07:10.176805542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:07:10.192431 1310742 docker.go:295] overlay module found
	I0214 03:07:10.195835 1310742 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0214 03:07:10.197716 1310742 start.go:298] selected driver: docker
	I0214 03:07:10.197736 1310742 start.go:902] validating driver "docker" against &{Name:functional-094137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-094137 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0214 03:07:10.197848 1310742 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0214 03:07:10.200665 1310742 out.go:177] 
	W0214 03:07:10.202766 1310742 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0214 03:07:10.205253 1310742 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.19s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/ServiceCmdConnect (11.72s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-094137 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-094137 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-cfx4k" [465ae68c-4e3f-4df0-a214-ab52335440af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-cfx4k" [465ae68c-4e3f-4df0-a214-ab52335440af] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.005517987s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32632
functional_test.go:1671: http://192.168.49.2:32632: success! body:

Hostname: hello-node-connect-7799dfb7c6-cfx4k

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32632
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.72s)
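
The flow here is: create a deployment, expose it as a NodePort service, ask minikube service for the URL, then GET it. A sketch of the last step that polls until the echoserver answers; the URL is hard-coded from this run, where a real caller would capture it from the command's stdout:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// fetchWithRetry GETs url until it answers, since the NodePort can be
// routable slightly before the pod behind it is serving.
func fetchWithRetry(url string, attempts int) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			body, rerr := io.ReadAll(resp.Body)
			resp.Body.Close()
			if rerr != nil {
				return "", rerr
			}
			return string(body), nil
		}
		lastErr = err
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no response after %d attempts: %w", attempts, lastErr)
}

func main() {
	body, err := fetchWithRetry("http://192.168.49.2:32632", 10)
	if err != nil {
		panic(err)
	}
	fmt.Print(body) // echoserver reflects hostname, request info, and headers
}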

TestFunctional/parallel/AddonsCmd (0.27s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

TestFunctional/parallel/PersistentVolumeClaim (28.11s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [74022ba6-38d9-4fcf-9381-1a9b19727316] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004947107s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-094137 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-094137 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-094137 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-094137 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1789221f-254c-47cb-a0d7-8a971c01235e] Pending
helpers_test.go:344: "sp-pod" [1789221f-254c-47cb-a0d7-8a971c01235e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1789221f-254c-47cb-a0d7-8a971c01235e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003725484s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-094137 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-094137 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-094137 delete -f testdata/storage-provisioner/pod.yaml: (1.105823783s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-094137 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b85ae69c-7d7d-4c40-b6f9-8f41b695e196] Pending
helpers_test.go:344: "sp-pod" [b85ae69c-7d7d-4c40-b6f9-8f41b695e196] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b85ae69c-7d7d-4c40-b6f9-8f41b695e196] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003578696s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-094137 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.11s)
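
The persistence check above is a simple pattern: write a marker file through the first pod, delete the pod, re-create it against the same claim, and list the mount to show the file survived. A hedged sketch of the same kubectl sequence; it omits the readiness wait the harness performs between apply and exec:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against this run's context.
func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-094137"}, args...)
	return exec.Command("kubectl", full...).Run()
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write a marker on the volume
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // remove the pod
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // new pod, same PVC
		// a real harness waits here for the new pod to be Running
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // the marker must still exist
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			panic(fmt.Sprintf("kubectl %v: %v", s, err))
		}
	}
	fmt.Println("data survived pod re-creation")
}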

TestFunctional/parallel/SSHCmd (0.72s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.59s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh -n functional-094137 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cp functional-094137:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1528446421/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh -n functional-094137 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh -n functional-094137 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.59s)

TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1271380/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /etc/test/nested/copy/1271380/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.23s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1271380.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /etc/ssl/certs/1271380.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1271380.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /usr/share/ca-certificates/1271380.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12713802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /etc/ssl/certs/12713802.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12713802.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /usr/share/ca-certificates/12713802.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.23s)
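
Each synced certificate is probed in three places: its PID-named .pem under /etc/ssl/certs and /usr/share/ca-certificates, plus a hash-named .0 entry (the hashed naming OpenSSL uses for CA directories). A sketch that replays the same sudo cat checks over minikube ssh, with paths copied from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/1271380.pem",
		"/usr/share/ca-certificates/1271380.pem",
		"/etc/ssl/certs/51391683.0", // hash-named entry for the same cert
	}
	for _, p := range paths {
		// Same shape as the logged commands: minikube ssh "sudo cat <path>".
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-094137",
			"ssh", "sudo cat "+p).Run()
		if err != nil {
			panic(p + ": " + err.Error())
		}
		fmt.Println(p, "present")
	}
}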

TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-094137 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 ssh "sudo systemctl is-active crio": exit status 1 (381.015207ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
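
Here a failing command is the passing outcome: with docker as the active runtime, systemctl is-active crio prints "inactive" and exits 3, which ssh surfaces as a non-zero status. A sketch that asserts exactly that, reusing this run's binary and profile:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output captures stdout only, so stderr's "ssh: Process exited" note
	// does not pollute the state string.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-094137",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err == nil || state != "inactive" {
		panic(fmt.Sprintf("expected crio inactive, got state=%q err=%v", state, err))
	}
	fmt.Println("crio is inactive, as expected with the docker runtime")
}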

TestFunctional/parallel/License (0.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-094137 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-094137 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-094137 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-094137 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1307850: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-094137 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.5s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-094137 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6669f7d4-4a4a-4869-a3f0-21867308f0fa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6669f7d4-4a4a-4869-a3f0-21867308f0fa] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003445538s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-094137 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
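
Once the tunnel is up, the LoadBalancer service's .status.loadBalancer.ingress[0].ip is populated with a routable address, which is all the jsonpath query above reads back. A sketch capturing it, with the context and service name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-094137",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	if ip == "" {
		panic("tunnel has not populated .status.loadBalancer yet")
	}
	fmt.Println("nginx-svc reachable via tunnel at", ip) // 10.110.29.126 in this run
}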

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.29.126 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-094137 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-094137 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-094137 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-p2z4b" [15e87563-42e8-4fad-be32-b18b9aef90e3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-p2z4b" [15e87563-42e8-4fad-be32-b18b9aef90e3] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005144332s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "314.719122ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "66.780092ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "336.246739ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "87.740445ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
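The `Took "..."` lines in the ProfileCmd tests come from timing each CLI invocation. A minimal way to reproduce that measurement in Go (binary path and arguments from the log; this is a sketch, not the harness's actual timing helper):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        cmd := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json", "--light")
        if err := cmd.Run(); err != nil {
            log.Fatalf("profile list failed: %v", err)
        }
        // Matches the report's `Took "..."` formatting.
        fmt.Printf("Took %q to run %q\n", time.Since(start).String(),
            "out/minikube-linux-arm64 profile list -o json --light")
    }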

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.81s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdany-port1571289004/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1707880026101723148" to /tmp/TestFunctionalparallelMountCmdany-port1571289004/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1707880026101723148" to /tmp/TestFunctionalparallelMountCmdany-port1571289004/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1707880026101723148" to /tmp/TestFunctionalparallelMountCmdany-port1571289004/001/test-1707880026101723148
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 14 03:07 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 14 03:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 14 03:07 test-1707880026101723148
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh cat /mount-9p/test-1707880026101723148
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-094137 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [776b2080-57a9-403a-a4bb-0e656c229462] Pending
helpers_test.go:344: "busybox-mount" [776b2080-57a9-403a-a4bb-0e656c229462] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [776b2080-57a9-403a-a4bb-0e656c229462] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [776b2080-57a9-403a-a4bb-0e656c229462] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003838247s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-094137 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdany-port1571289004/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.78s)
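The any-port mount is verified by checking that /mount-9p shows up as a 9p filesystem inside the guest. A sketch of that check via `minikube ssh` (profile name and findmnt pipeline from the log; assumed to run from a checkout where out/minikube-linux-arm64 exists):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent of: minikube -p functional-094137 ssh "findmnt -T /mount-9p | grep 9p"
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-094137",
            "ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
        if err != nil {
            log.Fatalf("mount not present: %v\n%s", err, out)
        }
        if !strings.Contains(string(out), "9p") {
            log.Fatal("unexpected findmnt output: ", string(out))
        }
        fmt.Print(string(out))
    }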

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 service list -o json
functional_test.go:1490: Took "591.171543ms" to run "out/minikube-linux-arm64 -p functional-094137 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31732
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31732
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdspecific-port2917544561/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (441.853716ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdspecific-port2917544561/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 ssh "sudo umount -f /mount-9p": exit status 1 (364.725295ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-094137 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdspecific-port2917544561/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.17s)
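Note how the first findmnt probe in specific-port exits non-zero (the mount was not up yet) and the harness simply runs the same command again. A generic retry loop for such eventually-consistent checks, as a sketch (the attempt count and delay are assumptions):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    // retry runs fn up to attempts times, sleeping between failed tries.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        err := retry(5, 2*time.Second, func() error {
            return exec.Command("out/minikube-linux-arm64", "-p", "functional-094137",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run()
        })
        if err != nil {
            log.Fatalf("mount never appeared: %v", err)
        }
        fmt.Println("mount is up")
    }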

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2686883984/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2686883984/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2686883984/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T" /mount1: (1.205382224s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-094137 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2686883984/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2686883984/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-094137 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2686883984/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.10s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 version -o=json --components: (1.078518441s)
--- PASS: TestFunctional/parallel/Version/components (1.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-094137 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-094137
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-094137
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-094137 image ls --format short --alsologtostderr:
I0214 03:07:37.665408 1313873 out.go:291] Setting OutFile to fd 1 ...
I0214 03:07:37.665572 1313873 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:37.665582 1313873 out.go:304] Setting ErrFile to fd 2...
I0214 03:07:37.665587 1313873 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:37.665831 1313873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
I0214 03:07:37.666531 1313873 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:37.666663 1313873 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:37.667193 1313873 cli_runner.go:164] Run: docker container inspect functional-094137 --format={{.State.Status}}
I0214 03:07:37.694539 1313873 ssh_runner.go:195] Run: systemctl --version
I0214 03:07:37.694607 1313873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-094137
I0214 03:07:37.714572 1313873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34064 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/functional-094137/id_rsa Username:docker}
I0214 03:07:37.808078 1313873 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
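The stderr trace above shows that `image ls` ultimately shells out to `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A hedged sketch of parsing that output in Go (the field names are docker's documented template fields; this parser is an illustration, not minikube's actual code):

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    type dockerImage struct {
        Repository string `json:"Repository"`
        Tag        string `json:"Tag"`
        ID         string `json:"ID"`
        Size       string `json:"Size"`
    }

    func main() {
        cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
        out, err := cmd.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        sc := bufio.NewScanner(out)
        for sc.Scan() {
            var img dockerImage
            // Each line is a self-contained JSON object describing one image.
            if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
                log.Fatalf("bad line %q: %v", sc.Text(), err)
            }
            fmt.Printf("%s:%s\t%s\t%s\n", img.Repository, img.Tag, img.ID, img.Size)
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }
        if err := cmd.Wait(); err != nil {
            log.Fatal(err)
        }
    }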

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-094137 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-094137 | 5bf6625ef7867 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| docker.io/library/nginx                     | latest            | 11deb55301007 | 192MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| docker.io/library/nginx                     | alpine            | d315ef79be32c | 43.5MB |
| gcr.io/google-containers/addon-resizer      | functional-094137 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-094137 image ls --format table --alsologtostderr:
I0214 03:07:38.241533 1314003 out.go:291] Setting OutFile to fd 1 ...
I0214 03:07:38.241814 1314003 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:38.241840 1314003 out.go:304] Setting ErrFile to fd 2...
I0214 03:07:38.241858 1314003 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:38.242136 1314003 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
I0214 03:07:38.242852 1314003 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:38.243038 1314003 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:38.243561 1314003 cli_runner.go:164] Run: docker container inspect functional-094137 --format={{.State.Status}}
I0214 03:07:38.272215 1314003 ssh_runner.go:195] Run: systemctl --version
I0214 03:07:38.272274 1314003 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-094137
I0214 03:07:38.300984 1314003 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34064 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/functional-094137/id_rsa Username:docker}
I0214 03:07:38.396272 1314003 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-094137 image ls --format json --alsologtostderr:
[{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"5bf6625ef7867c4af1552f8bba512ce29fcea30af8233e064322fb9bcf9951e1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-094137"],"size":"30"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"d315ef79be32cd8ae44f1
53a41c42e5e407c04f959074ddb8acc2c26649e2676","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43500000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kub
ernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-094137"],"size":"32900000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pau
se:3.1"],"size":"525000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-094137 image ls --format json --alsologtostderr:
I0214 03:07:37.970255 1313932 out.go:291] Setting OutFile to fd 1 ...
I0214 03:07:37.971437 1313932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:37.971481 1313932 out.go:304] Setting ErrFile to fd 2...
I0214 03:07:37.971504 1313932 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:37.971840 1313932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
I0214 03:07:37.972554 1313932 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:37.972729 1313932 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:37.973314 1313932 cli_runner.go:164] Run: docker container inspect functional-094137 --format={{.State.Status}}
I0214 03:07:37.993993 1313932 ssh_runner.go:195] Run: systemctl --version
I0214 03:07:37.994047 1313932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-094137
I0214 03:07:38.035811 1313932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34064 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/functional-094137/id_rsa Username:docker}
I0214 03:07:38.140679 1313932 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-094137 image ls --format yaml --alsologtostderr:
- id: 5bf6625ef7867c4af1552f8bba512ce29fcea30af8233e064322fb9bcf9951e1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-094137
size: "30"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 11deb55301007d6bf1db2ce20cb5d12e447541969990af4a03e2af8141ebdbed
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: d315ef79be32cd8ae44f153a41c42e5e407c04f959074ddb8acc2c26649e2676
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-094137
size: "32900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-094137 image ls --format yaml --alsologtostderr:
I0214 03:07:37.690042 1313874 out.go:291] Setting OutFile to fd 1 ...
I0214 03:07:37.690387 1313874 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:37.690421 1313874 out.go:304] Setting ErrFile to fd 2...
I0214 03:07:37.690441 1313874 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:37.691459 1313874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
I0214 03:07:37.692913 1313874 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:37.693329 1313874 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:37.693839 1313874 cli_runner.go:164] Run: docker container inspect functional-094137 --format={{.State.Status}}
I0214 03:07:37.725442 1313874 ssh_runner.go:195] Run: systemctl --version
I0214 03:07:37.725507 1313874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-094137
I0214 03:07:37.746039 1313874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34064 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/functional-094137/id_rsa Username:docker}
I0214 03:07:37.848276 1313874 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-094137 ssh pgrep buildkitd: exit status 1 (364.026507ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image build -t localhost/my-image:functional-094137 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 image build -t localhost/my-image:functional-094137 testdata/build --alsologtostderr: (2.003213984s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-094137 image build -t localhost/my-image:functional-094137 testdata/build --alsologtostderr:
I0214 03:07:38.298646 1314008 out.go:291] Setting OutFile to fd 1 ...
I0214 03:07:38.299315 1314008 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:38.299332 1314008 out.go:304] Setting ErrFile to fd 2...
I0214 03:07:38.299340 1314008 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0214 03:07:38.299630 1314008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
I0214 03:07:38.300344 1314008 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:38.302560 1314008 config.go:182] Loaded profile config "functional-094137": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0214 03:07:38.303124 1314008 cli_runner.go:164] Run: docker container inspect functional-094137 --format={{.State.Status}}
I0214 03:07:38.326667 1314008 ssh_runner.go:195] Run: systemctl --version
I0214 03:07:38.326727 1314008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-094137
I0214 03:07:38.344920 1314008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34064 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/functional-094137/id_rsa Username:docker}
I0214 03:07:38.440271 1314008 build_images.go:151] Building image from path: /tmp/build.340961376.tar
I0214 03:07:38.440335 1314008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0214 03:07:38.449634 1314008 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.340961376.tar
I0214 03:07:38.453116 1314008 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.340961376.tar: stat -c "%s %y" /var/lib/minikube/build/build.340961376.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.340961376.tar': No such file or directory
I0214 03:07:38.453147 1314008 ssh_runner.go:362] scp /tmp/build.340961376.tar --> /var/lib/minikube/build/build.340961376.tar (3072 bytes)
I0214 03:07:38.477203 1314008 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.340961376
I0214 03:07:38.486198 1314008 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.340961376 -xf /var/lib/minikube/build/build.340961376.tar
I0214 03:07:38.495175 1314008 docker.go:360] Building image: /var/lib/minikube/build/build.340961376
I0214 03:07:38.495246 1314008 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-094137 /var/lib/minikube/build/build.340961376
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.7s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:50629c9832250fccc4644a63775b9cec0e4cefb25ff418640c210ffde46b0901 done
#8 naming to localhost/my-image:functional-094137 done
#8 DONE 0.0s
I0214 03:07:40.173479 1314008 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-094137 /var/lib/minikube/build/build.340961376: (1.678204867s)
I0214 03:07:40.173559 1314008 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.340961376
I0214 03:07:40.182973 1314008 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.340961376.tar
I0214 03:07:40.194163 1314008 build_images.go:207] Built localhost/my-image:functional-094137 from /tmp/build.340961376.tar
I0214 03:07:40.194260 1314008 build_images.go:123] succeeded building to: functional-094137
I0214 03:07:40.194272 1314008 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.59s)
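The build trace above first probes whether the tarball already exists on the node with `stat -c "%s %y"` and treats a non-zero exit as "not there yet" before copying it over. A sketch of that existence-check-then-copy decision (paths and profile from the log; the ssh wrapper and flow are assumptions, not minikube's build_images code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // existsOnGuest mirrors the trace's existence check: a zero exit from
    // `stat -c "%s %y" <path>` means the file is already on the node.
    func existsOnGuest(profile, path string) bool {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "ssh", fmt.Sprintf("stat -c \"%%s %%y\" %s", path))
        return cmd.Run() == nil
    }

    func main() {
        const tar = "/var/lib/minikube/build/build.340961376.tar"
        if existsOnGuest("functional-094137", tar) {
            fmt.Println(tar, "already present, skipping copy")
            return
        }
        fmt.Println(tar, "missing; would scp it over and run docker build")
    }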

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.61365066s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-094137
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image load --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr
2024/02/14 03:07:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 image load --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr: (3.700568543s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.96s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-094137 docker-env) && out/minikube-linux-arm64 status -p functional-094137"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-094137 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.46s)
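DockerEnv/bash relies on `eval $(minikube docker-env)` exporting variables such as DOCKER_HOST so that a plain `docker images` talks to the cluster's daemon. A sketch that applies those exports programmatically instead of through bash (the `export KEY="value"` line format matches minikube's bash output; the parser itself is an assumption):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-094137",
            "docker-env").Output()
        if err != nil {
            log.Fatalf("docker-env failed: %v", err)
        }
        // Lines look like: export DOCKER_HOST="tcp://127.0.0.1:..."
        for _, line := range strings.Split(string(out), "\n") {
            line = strings.TrimSpace(line)
            if !strings.HasPrefix(line, "export ") {
                continue // skip comments and the eval hint
            }
            kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
            if len(kv) != 2 {
                continue
            }
            os.Setenv(kv[0], strings.Trim(kv[1], `"`))
        }
        // The child process inherits DOCKER_HOST etc., so this lists
        // the images inside the minikube node, not the host's.
        images, err := exec.Command("docker", "images").CombinedOutput()
        if err != nil {
            log.Fatalf("docker images failed: %v", err)
        }
        fmt.Print(string(images))
    }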

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image load --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 image load --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr: (3.216608868s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.365688349s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-094137
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image load --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 image load --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr: (3.160959261s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.78s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image save gcr.io/google-containers/addon-resizer:functional-094137 /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image rm gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-094137 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/addon-resizer-save.tar --alsologtostderr: (1.164075668s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.39s)
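ImageSaveToFile and ImageLoadFromFile together exercise a tar round-trip through `minikube image save` and `minikube image load`. A condensed sketch of the same round-trip (image name and profile from the log; the scratch path and error handling are assumptions):

    package main

    import (
        "log"
        "os/exec"
    )

    func run(args ...string) {
        cmd := exec.Command("out/minikube-linux-arm64", args...)
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("%v failed: %v\n%s", args, err, out)
        }
    }

    func main() {
        const tar = "/tmp/addon-resizer-save.tar" // assumed scratch path
        // Save the cached image to a tarball, then load it back.
        run("-p", "functional-094137", "image", "save",
            "gcr.io/google-containers/addon-resizer:functional-094137", tar)
        run("-p", "functional-094137", "image", "load", tar)
    }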

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-094137
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-094137 image save --daemon gcr.io/google-containers/addon-resizer:functional-094137 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-094137
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-094137
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-094137
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-094137
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestImageBuild/serial/Setup (34.59s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-105003 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-105003 --driver=docker  --container-runtime=docker: (34.590551747s)
--- PASS: TestImageBuild/serial/Setup (34.59s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-105003
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-105003: (1.683028335s)
--- PASS: TestImageBuild/serial/NormalBuild (1.68s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.87s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-105003
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.87s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.69s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-105003
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.69s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-105003
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (84.51s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-642069 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0214 03:08:32.941314 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-642069 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (1m24.513610141s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.51s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.93s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons enable ingress --alsologtostderr -v=5: (10.92944559s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.93s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-642069 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)

TestJSONOutput/start/Command (85.3s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-498551 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0214 03:11:16.781537 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:11:36.133938 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.139183 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.149436 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.170079 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.210319 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.290638 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.450992 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:36.771590 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:37.411915 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:38.692865 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:41.253444 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:46.374301 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:11:56.614495 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:12:17.094674 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-498551 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m25.293841143s)
--- PASS: TestJSONOutput/start/Command (85.30s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
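
The two parallel subtests above assert structural properties of the JSON event stream rather than running new commands. A minimal sketch of the IncreasingCurrentSteps property, assuming the "currentstep" values have already been pulled out of the io.k8s.sigs.minikube.step events (the sample values are illustrative, not the test's actual data):

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Hypothetical "currentstep" values collected from step events, in stream order.
	steps := []string{"0", "1", "3", "8", "19"}
	prev := -1
	for _, s := range steps {
		n, err := strconv.Atoi(s)
		if err != nil || n < prev {
			fmt.Println("currentstep regressed or was malformed at:", s)
			return
		}
		prev = n
	}
	fmt.Println("currentstep values never decrease")
}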

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-498551 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.52s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-498551 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-498551 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-498551 --output=json --user=testUser: (5.748916093s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-955336 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-955336 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.552145ms)

-- stdout --
	{"specversion":"1.0","id":"b9f48154-3bdf-42db-bb49-8a025329a712","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-955336] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"847383e2-55dc-4a97-be8f-4f6704d74a78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18165"}}
	{"specversion":"1.0","id":"1f0ea47c-5853-4a52-a5be-bf2743e14009","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f60918cb-05d7-48e2-b3d4-b056b9f7ec2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig"}}
	{"specversion":"1.0","id":"c9774610-c779-440c-bb0b-717168d4a2b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube"}}
	{"specversion":"1.0","id":"19539a54-a5d3-48c7-bd5e-59e15cc09735","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"61ebcd41-8125-4164-89f1-258e27da77b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"20d71686-99f0-4482-94cf-a4a7e4edb9a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-955336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-955336
--- PASS: TestErrorJSONOutput (0.25s)
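
Each line captured between the stdout markers above is a self-contained CloudEvents-style JSON object, which is what makes --output=json scriptable. Below is a minimal reader-side sketch in Go, assuming only the field names visible in the log; the cloudEvent struct is a hypothetical consumer type, not minikube's own.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent mirrors the envelope shown in the log; the data values there are all strings.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` in
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON noise on the stream
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// The DRV_UNSUPPORTED_OS event above carries its exitcode this way.
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

This lets automation branch on the error event's exit code (56 here) instead of scraping human-oriented text.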

TestKicCustomNetwork/create_custom_network (36.67s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-987493 --network=
E0214 03:12:58.055775 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-987493 --network=: (34.624348738s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-987493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-987493
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-987493: (2.019909828s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.67s)

TestKicCustomNetwork/use_default_bridge_network (33.26s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-179739 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-179739 --network=bridge: (31.253688539s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-179739" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-179739
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-179739: (1.98534028s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.26s)

TestKicExistingNetwork (31.52s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-856197 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-856197 --network=existing-network: (29.480418719s)
helpers_test.go:175: Cleaning up "existing-network-856197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-856197
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-856197: (1.90482387s)
--- PASS: TestKicExistingNetwork (31.52s)

TestKicCustomSubnet (35.43s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-045812 --subnet=192.168.60.0/24
E0214 03:14:19.976926 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-045812 --subnet=192.168.60.0/24: (33.330506917s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-045812 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-045812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-045812
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-045812: (2.073341462s)
--- PASS: TestKicCustomSubnet (35.43s)
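
The subnet assertion above leans on docker's Go-template support in network inspect. A minimal sketch of the same check, with the network name and expected subnet taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test uses: pull the first IPAM config's subnet.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-045812",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	fmt.Println("subnet matches:", got == "192.168.60.0/24")
}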

TestKicStaticIP (35.1s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-845036 --static-ip=192.168.200.200
E0214 03:14:59.930273 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:14:59.935567 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:14:59.945948 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:14:59.966257 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:00.006600 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:00.086981 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:00.247580 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:00.568131 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:01.209237 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:02.490359 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:05.050534 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:10.171207 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:20.411897 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-845036 --static-ip=192.168.200.200: (32.940336544s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-845036 ip
helpers_test.go:175: Cleaning up "static-ip-845036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-845036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-845036: (2.00674985s)
--- PASS: TestKicStaticIP (35.10s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-554862 --driver=docker  --container-runtime=docker
E0214 03:15:40.892134 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:15:49.096962 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-554862 --driver=docker  --container-runtime=docker: (29.132348215s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-557856 --driver=docker  --container-runtime=docker
E0214 03:16:21.852789 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-557856 --driver=docker  --container-runtime=docker: (33.789649783s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-554862
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-557856
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-557856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-557856
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-557856: (2.146378043s)
helpers_test.go:175: Cleaning up "first-554862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-554862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-554862: (2.106622288s)
--- PASS: TestMinikubeProfile (68.42s)

TestMountStart/serial/StartWithMountFirst (7.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-264680 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0214 03:16:36.133681 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-264680 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.411380944s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.41s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-264680 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-266944 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-266944 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.496113415s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.50s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-266944 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.45s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-264680 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-264680 --alsologtostderr -v=5: (1.454290837s)
--- PASS: TestMountStart/serial/DeleteFirst (1.45s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-266944 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-266944
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-266944: (1.204930552s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (11.06s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-266944
E0214 03:17:03.817151 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-266944: (10.061851941s)
--- PASS: TestMountStart/serial/RestartStopped (11.06s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-266944 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (79.02s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-186271 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0214 03:17:43.773644 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-arm64 start -p multinode-186271 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m18.479102506s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (79.02s)

TestMultiNode/serial/DeployApp2Nodes (42.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-186271 -- rollout status deployment/busybox: (3.040755819s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:530: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:521: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-7hg5r -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-scljw -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-7hg5r -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-scljw -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-7hg5r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-scljw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (42.84s)
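
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are a poll loop, not failures: the test re-runs the jsonpath query until both busybox replicas have been assigned an IP. A sketch of that pattern, with the retry count and interval as illustrative values rather than the test's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		out, err := exec.Command("kubectl", "--context", "multinode-186271",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			if ips := strings.Fields(string(out)); len(ips) == 2 {
				fmt.Println("both pod IPs assigned:", ips)
				return
			}
		}
		time.Sleep(5 * time.Second) // "may be temporary": the second pod is still being scheduled
	}
	fmt.Println("timed out waiting for 2 pod IPs")
}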

TestMultiNode/serial/PingHostFrom2Pods (1.07s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-7hg5r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-7hg5r -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-scljw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-186271 -- exec busybox-5b5d89c9d6-scljw -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.07s)
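
The busybox pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 takes line 5 of nslookup's output and keeps the third space-separated field, i.e. the resolved host IP that is then pinged. A Go sketch of the same extraction against hypothetical busybox output (real formatting varies across busybox versions):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical busybox nslookup output for host.minikube.internal.
	out := "Server: 10.96.0.10\nAddress: 10.96.0.10:53\n\nName: host.minikube.internal\nAddress: 1 192.168.58.1\n"
	lines := strings.Split(out, "\n")
	fields := strings.Fields(lines[4]) // awk 'NR==5'
	fmt.Println(fields[2])             // cut -d' ' -f3 -> 192.168.58.1
}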

TestMultiNode/serial/AddNode (20.5s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-186271 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-186271 -v 3 --alsologtostderr: (19.792018592s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.50s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-186271 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.48s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp testdata/cp-test.txt multinode-186271:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1832046779/001/cp-test_multinode-186271.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271:/home/docker/cp-test.txt multinode-186271-m02:/home/docker/cp-test_multinode-186271_multinode-186271-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m02 "sudo cat /home/docker/cp-test_multinode-186271_multinode-186271-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271:/home/docker/cp-test.txt multinode-186271-m03:/home/docker/cp-test_multinode-186271_multinode-186271-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m03 "sudo cat /home/docker/cp-test_multinode-186271_multinode-186271-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp testdata/cp-test.txt multinode-186271-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1832046779/001/cp-test_multinode-186271-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271-m02:/home/docker/cp-test.txt multinode-186271:/home/docker/cp-test_multinode-186271-m02_multinode-186271.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271 "sudo cat /home/docker/cp-test_multinode-186271-m02_multinode-186271.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271-m02:/home/docker/cp-test.txt multinode-186271-m03:/home/docker/cp-test_multinode-186271-m02_multinode-186271-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m03 "sudo cat /home/docker/cp-test_multinode-186271-m02_multinode-186271-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp testdata/cp-test.txt multinode-186271-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1832046779/001/cp-test_multinode-186271-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271-m03:/home/docker/cp-test.txt multinode-186271:/home/docker/cp-test_multinode-186271-m03_multinode-186271.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271 "sudo cat /home/docker/cp-test_multinode-186271-m03_multinode-186271.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 cp multinode-186271-m03:/home/docker/cp-test.txt multinode-186271-m02:/home/docker/cp-test_multinode-186271-m03_multinode-186271-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 ssh -n multinode-186271-m02 "sudo cat /home/docker/cp-test_multinode-186271-m03_multinode-186271-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.48s)
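
Every leg of the copy matrix above pairs a minikube cp with an ssh -n <node> "sudo cat" read-back. One leg as a standalone sketch, with the binary path, profile, and file paths taken from the log (error handling kept minimal):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	// Copy the local fixture onto the second node...
	if err := exec.Command(mk, "-p", "multinode-186271", "cp",
		"testdata/cp-test.txt", "multinode-186271-m02:/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	// ...then read it back over ssh to confirm the contents survived.
	out, err := exec.Command(mk, "-p", "multinode-186271", "ssh", "-n",
		"multinode-186271-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("read-back failed:", err)
		return
	}
	fmt.Printf("round-tripped contents: %q\n", out)
}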

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-arm64 -p multinode-186271 node stop m03: (1.227322001s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-186271 status: exit status 7 (523.610141ms)

-- stdout --
	multinode-186271
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-186271-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-186271-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr: exit status 7 (511.943661ms)

-- stdout --
	multinode-186271
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-186271-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-186271-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0214 03:19:43.600294 1378458 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:19:43.600450 1378458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:19:43.600460 1378458 out.go:304] Setting ErrFile to fd 2...
	I0214 03:19:43.600466 1378458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:19:43.600743 1378458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 03:19:43.600989 1378458 out.go:298] Setting JSON to false
	I0214 03:19:43.601038 1378458 mustload.go:65] Loading cluster: multinode-186271
	I0214 03:19:43.601125 1378458 notify.go:220] Checking for updates...
	I0214 03:19:43.601539 1378458 config.go:182] Loaded profile config "multinode-186271": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 03:19:43.601559 1378458 status.go:255] checking status of multinode-186271 ...
	I0214 03:19:43.602208 1378458 cli_runner.go:164] Run: docker container inspect multinode-186271 --format={{.State.Status}}
	I0214 03:19:43.620018 1378458 status.go:330] multinode-186271 host status = "Running" (err=<nil>)
	I0214 03:19:43.620041 1378458 host.go:66] Checking if "multinode-186271" exists ...
	I0214 03:19:43.620387 1378458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186271
	I0214 03:19:43.636751 1378458 host.go:66] Checking if "multinode-186271" exists ...
	I0214 03:19:43.637084 1378458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:19:43.637138 1378458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186271
	I0214 03:19:43.653091 1378458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34134 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/multinode-186271/id_rsa Username:docker}
	I0214 03:19:43.748896 1378458 ssh_runner.go:195] Run: systemctl --version
	I0214 03:19:43.753105 1378458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:19:43.764455 1378458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0214 03:19:43.834141 1378458 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:66 SystemTime:2024-02-14 03:19:43.824723138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1053-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:25.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae07eda36dd25f8a1b98dfbf587313b99c0190bb Expected:ae07eda36dd25f8a1b98dfbf587313b99c0190bb} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.12.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.24.5]] Warnings:<nil>}}
	I0214 03:19:43.834853 1378458 kubeconfig.go:92] found "multinode-186271" server: "https://192.168.58.2:8443"
	I0214 03:19:43.834875 1378458 api_server.go:166] Checking apiserver status ...
	I0214 03:19:43.834920 1378458 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0214 03:19:43.846553 1378458 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2053/cgroup
	I0214 03:19:43.856021 1378458 api_server.go:182] apiserver freezer: "5:freezer:/docker/cbd8c77ca6ee22b919dc6c070b68efbe85e4792ee494715de3c15c6d458ee467/kubepods/burstable/pod41539723fed31756e7cb7b802d025822/42340a284460add90d8b0bd7d944a3caf275d8aac862ebd32f0d5f0d78707b4c"
	I0214 03:19:43.856096 1378458 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cbd8c77ca6ee22b919dc6c070b68efbe85e4792ee494715de3c15c6d458ee467/kubepods/burstable/pod41539723fed31756e7cb7b802d025822/42340a284460add90d8b0bd7d944a3caf275d8aac862ebd32f0d5f0d78707b4c/freezer.state
	I0214 03:19:43.864411 1378458 api_server.go:204] freezer state: "THAWED"
	I0214 03:19:43.864445 1378458 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0214 03:19:43.873133 1378458 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0214 03:19:43.873160 1378458 status.go:421] multinode-186271 apiserver status = Running (err=<nil>)
	I0214 03:19:43.873170 1378458 status.go:257] multinode-186271 status: &{Name:multinode-186271 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 03:19:43.873240 1378458 status.go:255] checking status of multinode-186271-m02 ...
	I0214 03:19:43.873553 1378458 cli_runner.go:164] Run: docker container inspect multinode-186271-m02 --format={{.State.Status}}
	I0214 03:19:43.889077 1378458 status.go:330] multinode-186271-m02 host status = "Running" (err=<nil>)
	I0214 03:19:43.889101 1378458 host.go:66] Checking if "multinode-186271-m02" exists ...
	I0214 03:19:43.889394 1378458 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-186271-m02
	I0214 03:19:43.905163 1378458 host.go:66] Checking if "multinode-186271-m02" exists ...
	I0214 03:19:43.905461 1378458 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0214 03:19:43.905505 1378458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-186271-m02
	I0214 03:19:43.922001 1378458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34139 SSHKeyPath:/home/jenkins/minikube-integration/18165-1266022/.minikube/machines/multinode-186271-m02/id_rsa Username:docker}
	I0214 03:19:44.013165 1378458 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0214 03:19:44.025537 1378458 status.go:257] multinode-186271-m02 status: &{Name:multinode-186271-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0214 03:19:44.025573 1378458 status.go:255] checking status of multinode-186271-m03 ...
	I0214 03:19:44.025953 1378458 cli_runner.go:164] Run: docker container inspect multinode-186271-m03 --format={{.State.Status}}
	I0214 03:19:44.042370 1378458 status.go:330] multinode-186271-m03 host status = "Stopped" (err=<nil>)
	I0214 03:19:44.042394 1378458 status.go:343] host is not running, skipping remaining checks
	I0214 03:19:44.042402 1378458 status.go:257] multinode-186271-m03 status: &{Name:multinode-186271-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
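
Note that status exits non-zero (7 in both runs above) once any node's host is stopped, so callers can branch on the exit code rather than parse the tables. A minimal sketch of reading that code, assuming the same binary and profile as the log; treat 7-as-stopped as what this log shows rather than a documented contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-186271", "status").CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode()) // 7 here => at least one node stopped
	}
}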

TestMultiNode/serial/StartAfterStop (13.31s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:272: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-186271 node start m03 --alsologtostderr: (12.508620773s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.31s)

TestMultiNode/serial/RestartKeepsNodes (124.53s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-186271
multinode_test.go:318: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-186271
E0214 03:19:59.929838 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
multinode_test.go:318: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-186271: (22.537540958s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-186271 --wait=true -v=8 --alsologtostderr
E0214 03:20:27.614203 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:20:49.096949 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:21:36.134331 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-arm64 start -p multinode-186271 --wait=true -v=8 --alsologtostderr: (1m41.827607358s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-186271
--- PASS: TestMultiNode/serial/RestartKeepsNodes (124.53s)

TestMultiNode/serial/DeleteNode (5.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p multinode-186271 node delete m03: (4.414471511s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr
multinode_test.go:442: (dbg) Run:  docker volume ls
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.12s)
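
The go-template passed to kubectl above walks every node and prints the status of its "Ready" condition, one per line. Below is a minimal Go sketch of how that template evaluates, run against a trimmed-down, hypothetical stand-in for the `kubectl get nodes -o json` payload (kubectl executes such templates over the decoded JSON document, which is why the lowercase field names resolve):

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	func main() {
		// Hypothetical, trimmed-down stand-in for `kubectl get nodes -o json`;
		// only the fields the template touches are present.
		const nodesJSON = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

		// The same template the test passes to kubectl.
		const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		var nodes map[string]any
		if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
			panic(err)
		}
		// Prints " True" for the single healthy node above.
		if err := template.Must(template.New("ready").Parse(tmpl)).Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}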

TestMultiNode/serial/StopMultiNode (21.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 stop
E0214 03:22:12.141870 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
multinode_test.go:342: (dbg) Done: out/minikube-linux-arm64 -p multinode-186271 stop: (21.445540885s)
multinode_test.go:348: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-186271 status: exit status 7 (99.447205ms)
-- stdout --
	multinode-186271
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-186271-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr: exit status 7 (90.958909ms)
-- stdout --
	multinode-186271
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-186271-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0214 03:22:28.612382 1394375 out.go:291] Setting OutFile to fd 1 ...
	I0214 03:22:28.612572 1394375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:22:28.612602 1394375 out.go:304] Setting ErrFile to fd 2...
	I0214 03:22:28.612622 1394375 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0214 03:22:28.612860 1394375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18165-1266022/.minikube/bin
	I0214 03:22:28.613055 1394375 out.go:298] Setting JSON to false
	I0214 03:22:28.613138 1394375 mustload.go:65] Loading cluster: multinode-186271
	I0214 03:22:28.613195 1394375 notify.go:220] Checking for updates...
	I0214 03:22:28.613570 1394375 config.go:182] Loaded profile config "multinode-186271": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0214 03:22:28.613589 1394375 status.go:255] checking status of multinode-186271 ...
	I0214 03:22:28.614059 1394375 cli_runner.go:164] Run: docker container inspect multinode-186271 --format={{.State.Status}}
	I0214 03:22:28.629360 1394375 status.go:330] multinode-186271 host status = "Stopped" (err=<nil>)
	I0214 03:22:28.629385 1394375 status.go:343] host is not running, skipping remaining checks
	I0214 03:22:28.629393 1394375 status.go:257] multinode-186271 status: &{Name:multinode-186271 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0214 03:22:28.629418 1394375 status.go:255] checking status of multinode-186271-m02 ...
	I0214 03:22:28.629714 1394375 cli_runner.go:164] Run: docker container inspect multinode-186271-m02 --format={{.State.Status}}
	I0214 03:22:28.646591 1394375 status.go:330] multinode-186271-m02 host status = "Stopped" (err=<nil>)
	I0214 03:22:28.646609 1394375 status.go:343] host is not running, skipping remaining checks
	I0214 03:22:28.646616 1394375 status.go:257] multinode-186271-m02 status: &{Name:multinode-186271-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.64s)
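
Note that `minikube status` encodes cluster state in its exit code, which is why the test asserts on exit status 7 instead of treating it as a failure. A minimal Go sketch of capturing that code with os/exec follows (binary path and profile name are the ones used throughout this run; the handling itself is illustrative, not minikube's own test helper):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the test makes above.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-186271", "status")
		out, err := cmd.Output()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running (exit 0)")
		case errors.As(err, &exitErr):
			// Expected for stopped profiles; this run saw exit code 7.
			fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}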

TestMultiNode/serial/RestartMultiNode (84.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:372: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-186271 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:382: (dbg) Done: out/minikube-linux-arm64 start -p multinode-186271 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.568522129s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-arm64 -p multinode-186271 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (84.27s)

TestMultiNode/serial/ValidateNameConflict (36.06s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-186271
multinode_test.go:480: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-186271-m02 --driver=docker  --container-runtime=docker
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-186271-m02 --driver=docker  --container-runtime=docker: exit status 14 (107.309222ms)
-- stdout --
	* [multinode-186271-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18165
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-186271-m02' is duplicated with machine name 'multinode-186271-m02' in profile 'multinode-186271'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-186271-m03 --driver=docker  --container-runtime=docker
multinode_test.go:488: (dbg) Done: out/minikube-linux-arm64 start -p multinode-186271-m03 --driver=docker  --container-runtime=docker: (33.39393327s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-186271
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-186271: exit status 80 (354.763756ms)
-- stdout --
	* Adding node m03 to cluster multinode-186271
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-186271-m03 already exists in multinode-186271-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-186271-m03
multinode_test.go:500: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-186271-m03: (2.135954449s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.06s)

TestPreload (143.5s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-816804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0214 03:24:59.930613 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-816804 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m8.985345555s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-816804 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-816804 image pull gcr.io/k8s-minikube/busybox: (1.367332271s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-816804
E0214 03:25:49.097307 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-816804: (10.846247188s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-816804 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
E0214 03:26:36.134038 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-816804 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (59.843786801s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-816804 image list
helpers_test.go:175: Cleaning up "test-preload-816804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-816804
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-816804: (2.217310177s)
--- PASS: TestPreload (143.50s)

TestSkaffold (122.86s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1164340799 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-344363 --memory=2600 --driver=docker  --container-runtime=docker
E0214 03:27:59.178198 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-344363 --memory=2600 --driver=docker  --container-runtime=docker: (32.556257466s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1164340799 run --minikube-profile skaffold-344363 --kube-context skaffold-344363 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1164340799 run --minikube-profile skaffold-344363 --kube-context skaffold-344363 --status-check=true --port-forward=false --interactive=false: (1m12.964771345s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-fff7cc6d-mj2hv" [568fd338-cd40-48f3-9219-cddc6d0f2164] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003282651s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7c9fb6bf4-pmwqw" [7800eb40-8c58-4403-a418-def43a39df62] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003606092s
helpers_test.go:175: Cleaning up "skaffold-344363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-344363
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-344363: (2.936912738s)
--- PASS: TestSkaffold (122.86s)

TestInsufficientStorage (11.06s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-824540 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-824540 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.771429259s)
-- stdout --
	{"specversion":"1.0","id":"f2d2dc81-97b2-446d-8576-2d3d75dd6113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-824540] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"baa89957-5ff3-4b9c-a45d-49ffc259b43a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18165"}}
	{"specversion":"1.0","id":"903ef850-a700-47ec-ae09-9c79b1681374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a97c2989-f0b7-48af-b787-db1a85a0bd55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig"}}
	{"specversion":"1.0","id":"a46f66fe-dcb9-45b9-9d05-e7fc82060f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube"}}
	{"specversion":"1.0","id":"feca94d1-fb0a-40d4-9fd8-c53acfc46a77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"05908988-4fbc-4070-97f5-853c68684d44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a7b9e2fe-5930-4721-aec7-34173bf56ff6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c369d8af-d406-4e25-babc-9ef0955fab5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cd764142-2a4c-4a95-aead-c8e2053ba360","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"392a630a-181f-4acf-a669-33b0232bb2bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"af77d61b-e272-4095-a11a-6f20995f4453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-824540 in cluster insufficient-storage-824540","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"81720925-6baf-4d88-b59e-6bf869936998","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.42-1704759386-17866 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"384a7a71-fccf-4578-aa12-d1119aeb2600","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e9ff4ea-40a1-44eb-bbb0-e6b8dbf34256","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-824540 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-824540 --output=json --layout=cluster: exit status 7 (287.015771ms)
-- stdout --
	{"Name":"insufficient-storage-824540","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-824540","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0214 03:29:42.857600 1429348 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-824540" does not appear in /home/jenkins/minikube-integration/18165-1266022/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-824540 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-824540 --output=json --layout=cluster: exit status 7 (294.511937ms)
-- stdout --
	{"Name":"insufficient-storage-824540","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-824540","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0214 03:29:43.155405 1429400 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-824540" does not appear in /home/jenkins/minikube-integration/18165-1266022/kubeconfig
	E0214 03:29:43.165766 1429400 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/insufficient-storage-824540/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-824540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-824540
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-824540: (1.708272509s)
--- PASS: TestInsufficientStorage (11.06s)
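
With --output=json, minikube emits one CloudEvents-style JSON object per line, as captured above. Below is a short sketch of scanning that stream for the io.k8s.sigs.minikube.error event that carries exit code 26 (field names are taken from the log; the program itself is an illustration, not part of the test suite):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models only the fields of minikube's JSON output that the log
	// above shows; all data values arrive as strings.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe in the stdout of: minikube start --output=json ...
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s (exitcode %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}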

TestRunningBinaryUpgrade (126s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2326721399 start -p running-upgrade-154610 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0214 03:34:19.857681 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:19.863045 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:19.873259 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:19.893477 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:19.933856 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:20.015147 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:20.175491 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:20.495818 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:21.136856 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:22.417536 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:24.978100 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:30.098288 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2326721399 start -p running-upgrade-154610 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m19.603200949s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-154610 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0214 03:34:40.339469 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:34:59.930672 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:35:00.819760 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-154610 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.702373734s)
helpers_test.go:175: Cleaning up "running-upgrade-154610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-154610
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-154610: (2.225028449s)
--- PASS: TestRunningBinaryUpgrade (126.00s)

TestKubernetesUpgrade (415.24s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0214 03:36:36.135825 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:37:03.700854 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m12.432447959s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-774839
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-774839: (4.396423734s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-774839 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-774839 status --format={{.Host}}: exit status 7 (112.749407ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m48.635119744s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-774839 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=docker: exit status 106 (112.838964ms)
-- stdout --
	* [kubernetes-upgrade-774839] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18165
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-774839
	    minikube start -p kubernetes-upgrade-774839 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7748392 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-774839 --kubernetes-version=v1.29.0-rc.2
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-774839 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (46.858526433s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-774839" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-774839
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-774839: (2.572550813s)
--- PASS: TestKubernetesUpgrade (415.24s)
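
The downgrade attempt above fails fast with exit code 106 because the requested version is compared against the running cluster's version before any work begins. A rough sketch of such a guard using golang.org/x/mod/semver follows; this is an assumed stand-in for minikube's internal version comparison, not its actual code, and it requires the golang.org/x/mod module:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	// checkDowngrade mimics the behaviour seen above: moving from
	// v1.29.0-rc.2 down to v1.16.0 is refused.
	func checkDowngrade(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				existing, requested)
		}
		return nil
	}

	func main() {
		if err := checkDowngrade("v1.29.0-rc.2", "v1.16.0"); err != nil {
			fmt.Println("refused:", err) // corresponds to exit code 106 in the log
		}
	}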

TestMissingContainerUpgrade (119.89s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1754859230 start -p missing-upgrade-359268 --memory=2200 --driver=docker  --container-runtime=docker
E0214 03:35:41.780655 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:35:49.096957 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1754859230 start -p missing-upgrade-359268 --memory=2200 --driver=docker  --container-runtime=docker: (36.936819358s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-359268
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-359268: (10.477996924s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-359268
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-359268 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-359268 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m8.816146435s)
helpers_test.go:175: Cleaning up "missing-upgrade-359268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-359268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-359268: (2.279807876s)
--- PASS: TestMissingContainerUpgrade (119.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.17s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-240472 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-240472 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (166.472064ms)
-- stdout --
	* [NoKubernetes-240472] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18165
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18165-1266022/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18165-1266022/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.17s)

TestNoKubernetes/serial/StartWithK8s (43.58s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-240472 --driver=docker  --container-runtime=docker
E0214 03:29:59.930671 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-240472 --driver=docker  --container-runtime=docker: (43.211134513s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-240472 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.58s)

TestNoKubernetes/serial/StartWithStopK8s (16.64s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-240472 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-240472 --no-kubernetes --driver=docker  --container-runtime=docker: (14.521172484s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-240472 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-240472 status -o json: exit status 2 (308.145814ms)
-- stdout --
	{"Name":"NoKubernetes-240472","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-240472
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-240472: (1.805658898s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.64s)

TestNoKubernetes/serial/Start (9.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-240472 --no-kubernetes --driver=docker  --container-runtime=docker
E0214 03:30:49.097221 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-240472 --no-kubernetes --driver=docker  --container-runtime=docker: (9.960117588s)
--- PASS: TestNoKubernetes/serial/Start (9.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-240472 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-240472 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.790609ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (0.94s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.94s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-240472
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-240472: (1.229823445s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (7.41s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-240472 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-240472 --driver=docker  --container-runtime=docker: (7.409595212s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.41s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-240472 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-240472 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.156382ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (1.27s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.27s)

TestStoppedBinaryUpgrade/Upgrade (83.17s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3907858228 start -p stopped-upgrade-902944 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3907858228 start -p stopped-upgrade-902944 --memory=2200 --vm-driver=docker  --container-runtime=docker: (41.990683322s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3907858228 -p stopped-upgrade-902944 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3907858228 -p stopped-upgrade-902944 stop: (10.890035368s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-902944 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-902944 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.290804187s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.17s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.55s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-902944
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-902944: (1.5464429s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.55s)

TestPause/serial/Start (50.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-679690 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0214 03:38:52.143046 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:39:19.857534 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-679690 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (50.645588703s)
--- PASS: TestPause/serial/Start (50.65s)

TestPause/serial/SecondStartNoReconfiguration (39.54s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-679690 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0214 03:39:47.542148 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:39:59.929706 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-679690 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (39.519318302s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.54s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-679690 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.32s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-679690 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-679690 --output=json --layout=cluster: exit status 2 (323.731324ms)
-- stdout --
	{"Name":"pause-679690","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-679690","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
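
status --output=json --layout=cluster returns a single JSON document whose HTTP-flavoured status codes appear above (418 Paused, 405 Stopped, 200 OK). Below is a minimal sketch of decoding the handful of fields used here, with the struct shape inferred from the captured output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterState covers only the fields exercised here; the full document
	// carries more (Step, StepDetail, Components, ...).
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		// Abbreviated copy of the document captured in the log above.
		raw := `{"Name":"pause-679690","StatusCode":418,"StatusName":"Paused",
			"Nodes":[{"Name":"pause-679690","StatusName":"OK"}]}`

		var st clusterState
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// The 4xx-style code mirrors HTTP: 418 means the cluster is paused.
		fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	}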

TestPause/serial/Unpause (0.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-679690 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.85s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-679690 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

TestPause/serial/DeletePaused (2.24s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-679690 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-679690 --alsologtostderr -v=5: (2.239615043s)
--- PASS: TestPause/serial/DeletePaused (2.24s)

TestPause/serial/VerifyDeletedResources (0.37s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-679690
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-679690: exit status 1 (14.786011ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-679690: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.37s)

TestNetworkPlugins/group/auto/Start (89.34s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0214 03:40:49.096874 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:41:36.133725 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m29.342435771s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

TestNetworkPlugins/group/auto/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5vkls" [d9e0c6c9-b6bb-42d8-85f5-470091276e77] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-5vkls" [d9e0c6c9-b6bb-42d8-85f5-470091276e77] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004310896s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)

TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)
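
The DNS, Localhost, and HairPin checks are one pattern with three payloads: exec a command inside the netcat deployment and let its exit status decide. Localhost proves the pod can reach its own port on 127.0.0.1; HairPin proves it can reach itself back through the `netcat` service name, which exercises hairpin NAT. A table-driven Go sketch using the exact commands from the log (the probe helper itself is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probe runs one connectivity check inside the netcat deployment via
	// kubectl exec; a zero exit status means the probe passed.
	func probe(kubecontext string, cmd ...string) error {
		args := append([]string{"--context", kubecontext,
			"exec", "deployment/netcat", "--"}, cmd...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		return nil
	}

	func main() {
		checks := []struct {
			name string
			cmd  []string
		}{
			{"DNS", []string{"nslookup", "kubernetes.default"}},
			{"Localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
			{"HairPin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
		}
		for _, c := range checks {
			fmt.Println(c.name, probe("auto-877693", c.cmd...))
		}
	}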

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m11.811910513s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.81s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (89.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m29.886620118s)
--- PASS: TestNetworkPlugins/group/calico/Start (89.89s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f85dz" [44c30b8d-01e4-4334-a6ba-5d12940ad029] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004972743s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qfcbz" [86aa1172-9e1d-4a57-80fd-94d7012a5727] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qfcbz" [86aa1172-9e1d-4a57-80fd-94d7012a5727] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004016082s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0214 03:44:39.179246 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m5.403226315s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.40s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-q5zqh" [6ade7e11-865f-470b-9445-b7934b7e06fa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005760359s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r2v9d" [c76ffc75-0d16-4e16-aa2f-424ec7c74028] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 03:44:59.930448 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-r2v9d" [c76ffc75-0d16-4e16-aa2f-424ec7c74028] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.003680913s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.39s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/false/Start (94.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m34.573865237s)
--- PASS: TestNetworkPlugins/group/false/Start (94.57s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wpmq5" [6e6b4cac-3d34-4d16-b9ce-c4f53477b745] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wpmq5" [6e6b4cac-3d34-4d16-b9ce-c4f53477b745] Running
E0214 03:45:49.097028 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.0041616s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (84.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0214 03:46:36.134047 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:46:57.021152 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.026506 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.036813 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.057165 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.097426 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.177654 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.338015 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:57.659107 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:58.299719 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:46:59.579902 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:47:02.140278 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:47:07.261252 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m24.318320506s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (84.32s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2fvqj" [e0e8d806-f835-4df1-86fc-a00f7bccd273] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-2fvqj" [e0e8d806-f835-4df1-86fc-a00f7bccd273] Running
E0214 03:47:17.501562 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00449386s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (71.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m11.854804141s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.85s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9cvk7" [c786849d-7bf4-47a4-8c01-3bc1c74a8b71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9cvk7" [c786849d-7bf4-47a4-8c01-3bc1c74a8b71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005718248s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (90.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0214 03:48:47.047428 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.052698 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.062929 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.083218 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.123497 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.203727 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.364601 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:47.684931 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:48.325175 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:49.605505 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:48:52.166080 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m30.298795979s)
--- PASS: TestNetworkPlugins/group/bridge/Start (90.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-jk4dg" [e7b056da-5080-4fca-a3e7-cc80b28c87f1] Running
E0214 03:48:57.287183 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004905129s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qqn95" [e28f467f-b1e4-4457-9ef5-ac0c56c3ec52] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qqn95" [e28f467f-b1e4-4457-9ef5-ac0c56c3ec52] Running
E0214 03:49:07.527993 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003787453s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (52.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0214 03:49:40.862597 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:49:48.440507 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:48.445797 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:48.456002 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:48.476242 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:48.516499 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:48.596801 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:48.757403 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:49.077729 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:49.718685 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:49:50.999369 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-877693 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (52.429108257s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-877693 replace --force -f testdata/netcat-deployment.yaml
E0214 03:49:53.559925 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rh8hh" [f54041ae-660c-455a-9d12-e87f707d9795] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 03:49:58.680123 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-rh8hh" [f54041ae-660c-455a-9d12-e87f707d9795] Running
E0214 03:49:59.930512 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.005416654s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-877693 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (14.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-877693 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dvptd" [b8cef92b-0858-43dd-9b8d-4c2a1bf119fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0214 03:50:29.400676 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dvptd" [b8cef92b-0858-43dd-9b8d-4c2a1bf119fd] Running
E0214 03:50:39.793076 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:39.798314 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:39.808561 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:39.828816 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:39.869221 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:39.949674 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:40.110181 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:40.431385 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:41.071544 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:42.352401 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:50:42.902768 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.004104778s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (138.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-250819 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-250819 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (2m18.038843245s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (138.04s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-877693 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-877693 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.26s)
E0214 04:07:22.903203 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 04:07:33.367693 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:07:43.866963 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 04:07:48.476318 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (56.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-456178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0214 03:51:10.361854 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:51:20.755210 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:51:30.891776 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:51:36.133463 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:51:57.020326 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:52:01.716215 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-456178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (56.66379821s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.66s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-456178 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8ec1b29c-c45f-4e8f-97f1-aa238cc89efd] Pending
helpers_test.go:344: "busybox" [8ec1b29c-c45f-4e8f-97f1-aa238cc89efd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8ec1b29c-c45f-4e8f-97f1-aa238cc89efd] Running
E0214 03:52:10.914128 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:10.919545 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:10.929783 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:10.950040 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:10.990276 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:11.070585 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:11.230939 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:11.551562 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:12.192417 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:13.473023 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003996112s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-456178 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.36s)
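
DeployApp ends by running `ulimit -n` inside the busybox pod as a sanity check on the container's open-file limit. A short Go sketch of reading and validating that value; the 1024 threshold is an assumption, since the log does not show what the test compares against:

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)

	// openFileLimit returns the `ulimit -n` value reported inside the busybox pod.
	func openFileLimit(kubecontext string) (int, error) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(out)))
	}

	func main() {
		n, err := openFileLimit("no-preload-456178")
		if err == nil && n < 1024 { // threshold assumed for illustration
			err = fmt.Errorf("suspiciously low open-file limit: %d", n)
		}
		fmt.Println(n, err)
	}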

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-456178 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-456178 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.10s)
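
EnableAddonWhileActive enables metrics-server with --images and --registries overrides pointing at registry.k8s.io/echoserver:1.4 and fake.domain, then describes the deployment to confirm the substitution. A sketch of one way to verify the override, reading the container image with `kubectl get -o jsonpath` rather than the describe call in the log; the verification logic here is assumed:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// metricsServerImage returns the image reference recorded on the
	// metrics-server deployment, so a test can confirm the override took.
	func metricsServerImage(kubecontext string) (string, error) {
		out, err := exec.Command("kubectl", "--context", kubecontext,
			"get", "deploy/metrics-server", "-n", "kube-system",
			"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		img, err := metricsServerImage("no-preload-456178")
		// fake.domain is the registry override passed on the command line above
		fmt.Println(img, err, strings.HasPrefix(img, "fake.domain/"))
	}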

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-456178 --alsologtostderr -v=3
E0214 03:52:16.033516 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:21.154168 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:24.703619 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-456178 --alsologtostderr -v=3: (10.934939429s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-456178 -n no-preload-456178
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-456178 -n no-preload-456178: exit status 7 (79.622239ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-456178 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
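
EnableAddonAfterStop tolerates exit status 7 from `minikube status` ("may be ok" in the log) because status encodes host state in its exit code and the host was just stopped. A Go sketch of accepting that one code while still failing on anything else; treating 7 as "stopped" is taken from this log, and the helper is hypothetical:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// hostState runs `minikube status` and treats exit code 7 (host stopped)
	// as a valid, non-fatal outcome rather than a test failure.
	func hostState(profile string) (string, error) {
		out, err := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			return string(out), nil // stopped host: expected right after `minikube stop`
		}
		return string(out), err
	}

	func main() {
		fmt.Println(hostState("no-preload-456178"))
	}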

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (317.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-456178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0214 03:52:31.394999 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:52:32.282915 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:52:43.867132 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:43.872413 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:43.882718 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:43.903073 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:43.943382 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:44.023885 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:44.184279 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:44.505429 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:45.146395 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:52:46.426602 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-456178 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (5m16.892433915s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-456178 -n no-preload-456178
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (317.39s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250819 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [845190fc-9578-4465-9bb5-a087454c98b9] Pending
E0214 03:52:48.986798 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
helpers_test.go:344: "busybox" [845190fc-9578-4465-9bb5-a087454c98b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0214 03:52:51.875181 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
helpers_test.go:344: "busybox" [845190fc-9578-4465-9bb5-a087454c98b9] Running
E0214 03:52:54.107412 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.002991977s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-250819 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.62s)
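The "waiting 8m0s for pods matching ..." steps above poll the cluster until every pod carrying the label is healthy, which is why the busybox pod is seen moving through Pending and ContainersNotReady before Running. A rough client-go sketch of that polling loop; the real helper lives in helpers_test.go and also tracks the readiness conditions printed above, whereas this version checks only the Running phase:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Target the same kubeconfig context the test selects with `kubectl --context`.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-250819"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll until every matching pod is Running, within the same 8m0s budget.
	err = wait.PollImmediate(2*time.Second, 8*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // nothing yet; keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	})
	fmt.Println("healthy:", err == nil)
}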

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-250819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-250819 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.04s)
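The --images and --registries flags rewrite an addon's default image name and registry before it is deployed, and the follow-up kubectl describe of deploy/metrics-server is how the test confirms the override landed (fake.domain should appear in the deployment rather than the stock metrics-server registry). A small spot check to the same effect; reading the image via jsonpath is a shortcut for illustration, not the test's exact command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// After `addons enable metrics-server --images=... --registries=...`,
	// the container image should reference the overridden registry/image.
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-250819",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("metrics-server image:", string(out)) // expected to mention fake.domain
}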

TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-250819 --alsologtostderr -v=3
E0214 03:53:04.348254 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-250819 --alsologtostderr -v=3: (10.942995031s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-250819 -n old-k8s-version-250819
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-250819 -n old-k8s-version-250819: exit status 7 (80.793491ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-250819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (438.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-250819 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0
E0214 03:53:23.636748 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:53:24.828530 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:53:32.835944 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:53:47.047269 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:53:54.536429 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:54.541715 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:54.551969 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:54.572251 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:54.612583 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:54.692819 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:54.853221 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:55.173766 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:55.814252 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:57.094439 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:53:59.655114 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:54:04.775736 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:54:05.788987 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:54:14.732665 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:54:15.017416 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:54:19.857937 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:54:35.497661 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:54:48.440986 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:54:53.852408 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:53.857685 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:53.867947 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:53.888226 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:53.928628 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:54.015682 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:54.176745 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:54.497270 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:54.756822 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:54:55.138402 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:56.419162 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:58.979719 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:54:59.929720 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 03:55:04.100016 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:55:14.340899 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:55:16.123357 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:55:16.457916 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:55:27.709836 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:55:29.020390 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.025611 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.035903 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.056255 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.096610 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.176949 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.337219 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:29.657828 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:30.297968 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:31.578974 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:32.143570 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:55:34.140109 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:34.821485 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:55:39.261171 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:55:39.792827 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:55:49.096993 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 03:55:49.501480 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:56:07.477687 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 03:56:09.982503 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:56:15.781717 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:56:36.134071 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 03:56:38.378712 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:56:50.943623 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:56:57.020356 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 03:57:10.914667 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
E0214 03:57:37.702504 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 03:57:38.597325 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-250819 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.16.0: (7m17.836023389s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-250819 -n old-k8s-version-250819
E0214 04:00:29.020691 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (438.19s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vxvbt" [450c6d1c-0f1d-41fe-ba39-a8d0181df93e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0214 03:57:43.867479 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vxvbt" [450c6d1c-0f1d-41fe-ba39-a8d0181df93e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.007393756s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-vxvbt" [450c6d1c-0f1d-41fe-ba39-a8d0181df93e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004051874s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-456178 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-456178 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)
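image list --format=json dumps the images present in the node's container runtime, and the test scans the list for anything outside the expected Kubernetes set; the busybox hit above is expected, since DeployApp loaded it earlier in the serial. A sketch of such a scan, where the repoTags JSON key and the registry prefix used as the cutoff are assumptions for illustration rather than minikube's documented schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// listImage keeps only the field this check needs; the JSON key is assumed.
type listImage struct {
	RepoTags []string `json:"repoTags"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-456178",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []listImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	// Report tags outside the stock registry, roughly what the
	// "Found non-minikube image" line reports.
	for _, img := range images {
		for _, tag := range img.RepoTags {
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("non-default image:", tag)
			}
		}
	}
}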

TestStartStop/group/no-preload/serial/Pause (3.27s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-456178 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-456178 -n no-preload-456178
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-456178 -n no-preload-456178: exit status 2 (334.327616ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-456178 -n no-preload-456178
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-456178 -n no-preload-456178: exit status 2 (357.204449ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-456178 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-456178 -n no-preload-456178
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-456178 -n no-preload-456178
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.27s)
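The --format={{.APIServer}} and --format={{.Kubelet}} arguments are Go text/template expressions evaluated against minikube's status structure, which is why a paused profile reports Paused for the API server but Stopped for the kubelet: pausing freezes the control-plane containers while the kubelet is stopped outright. A toy reproduction of the templating, with a stand-in struct (the real one in the minikube source has more fields):

package main

import (
	"os"
	"text/template"
)

// Status loosely mirrors the fields the test templates against.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	// The same template strings the test passes to --format.
	for _, f := range []string{"{{.APIServer}}", "{{.Kubelet}}"} {
		tmpl := template.Must(template.New("status").Parse(f))
		// State after `minikube pause`: API server paused, kubelet stopped.
		tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"})
		os.Stdout.WriteString("\n")
	}
}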

TestStartStop/group/embed-certs/serial/FirstStart (86.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-594198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0214 03:58:11.550568 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 03:58:12.863809 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 03:58:47.047783 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 03:58:54.536420 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 03:59:19.858141 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 03:59:22.219647 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-594198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m26.197811239s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.20s)

TestStartStop/group/embed-certs/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-594198 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4ed91c07-a4d4-40bc-ae87-e43e5564e599] Pending
helpers_test.go:344: "busybox" [4ed91c07-a4d4-40bc-ae87-e43e5564e599] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4ed91c07-a4d4-40bc-ae87-e43e5564e599] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004416132s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-594198 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-594198 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-594198 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.070302877s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-594198 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-594198 --alsologtostderr -v=3
E0214 03:59:48.441003 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 03:59:53.852893 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-594198 --alsologtostderr -v=3: (11.001662076s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.00s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-594198 -n embed-certs-594198
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-594198 -n embed-certs-594198: exit status 7 (87.554709ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-594198 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (320.63s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-594198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0214 03:59:59.930491 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 04:00:21.543680 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-594198 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m20.124271788s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-594198 -n embed-certs-594198
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (320.63s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zxqnx" [ed78b161-5303-4625-a90d-3eb3c8429cbe] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004339599s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-zxqnx" [ed78b161-5303-4625-a90d-3eb3c8429cbe] Running
E0214 04:00:39.793143 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010397788s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-250819 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-250819 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-250819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-250819 -n old-k8s-version-250819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-250819 -n old-k8s-version-250819: exit status 2 (319.945372ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-250819 -n old-k8s-version-250819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-250819 -n old-k8s-version-250819: exit status 2 (343.724837ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-250819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-250819 -n old-k8s-version-250819
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-250819 -n old-k8s-version-250819
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-241079 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0214 04:00:49.096870 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 04:00:56.704467 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 04:01:19.179457 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 04:01:36.134004 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
E0214 04:01:57.021212 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 04:02:05.684740 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:05.690433 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:05.700668 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:05.720918 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:05.761146 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:05.841388 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:06.001687 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:06.322485 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:06.963368 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:08.243581 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:10.803823 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:10.915017 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-241079 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (1m27.498057352s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.50s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-241079 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7bf28d10-11ee-4da4-91fd-4673599989a4] Pending
helpers_test.go:344: "busybox" [7bf28d10-11ee-4da4-91fd-4673599989a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0214 04:02:15.924251 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
helpers_test.go:344: "busybox" [7bf28d10-11ee-4da4-91fd-4673599989a4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004700926s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-241079 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-241079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-241079 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069274822s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-241079 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-241079 --alsologtostderr -v=3
E0214 04:02:26.165139 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-241079 --alsologtostderr -v=3: (10.951895753s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079: exit status 7 (86.499126ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-241079 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-241079 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4
E0214 04:02:43.867473 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/enable-default-cni-877693/client.crt: no such file or directory
E0214 04:02:46.645498 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:02:48.476074 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:48.481328 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:48.491590 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:48.511839 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:48.552091 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:48.632636 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:48.793061 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:49.114139 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:49.754349 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:51.034958 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:53.595947 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:02:58.716750 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:03:08.957268 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:03:20.064551 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 04:03:27.605761 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:03:29.438041 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:03:47.047252 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
E0214 04:03:54.536582 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/flannel-877693/client.crt: no such file or directory
E0214 04:04:10.398545 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
E0214 04:04:19.857834 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/skaffold-344363/client.crt: no such file or directory
E0214 04:04:42.976160 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 04:04:48.440875 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
E0214 04:04:49.526812 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
E0214 04:04:53.852228 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/bridge-877693/client.crt: no such file or directory
E0214 04:04:59.929832 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/ingress-addon-legacy-642069/client.crt: no such file or directory
E0214 04:05:10.093295 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kindnet-877693/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-241079 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.4: (5m40.636359449s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (341.04s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-grs6f" [416c2fd3-d1f3-430e-92ec-e212da8fda84] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-grs6f" [416c2fd3-d1f3-430e-92ec-e212da8fda84] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.00419363s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-grs6f" [416c2fd3-d1f3-430e-92ec-e212da8fda84] Running
E0214 04:05:29.020267 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/kubenet-877693/client.crt: no such file or directory
E0214 04:05:32.319070 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004025091s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-594198 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-594198 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
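Note: image list --format=json emits one JSON object per image; a hedged way to spot non-minikube entries like the busybox image above, assuming jq is installed and the schema exposes a repoTags array (the `[]?` guards images without tags):

	out/minikube-linux-arm64 -p embed-certs-594198 image list --format=json | jq -r '.[].repoTags[]?'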

TestStartStop/group/embed-certs/serial/Pause (3s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-594198 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-594198 -n embed-certs-594198
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-594198 -n embed-certs-594198: exit status 2 (354.045135ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-594198 -n embed-certs-594198
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-594198 -n embed-certs-594198: exit status 2 (363.939498ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-594198 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-594198 -n embed-certs-594198
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-594198 -n embed-certs-594198
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.00s)
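Note: the pause/unpause round trip above can be reproduced by hand; exit status 2 from status while the cluster is paused is the expected signal, not a failure. A minimal sketch against the same profile:

	out/minikube-linux-arm64 pause -p embed-certs-594198
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-594198   # expect Paused, exit 2
	out/minikube-linux-arm64 unpause -p embed-certs-594198
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-594198   # expect Running, exit 0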

TestStartStop/group/newest-cni/serial/FirstStart (47.35s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-568453 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0214 04:05:39.792885 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 04:05:49.096928 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/addons-565438/client.crt: no such file or directory
E0214 04:06:11.484186 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/calico-877693/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-568453 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (47.345225727s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.35s)
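Note: a quick, hedged way to confirm the kubeadm.pod-network-cidr extra-config (10.42.0.0/16) was applied, assuming the kubectl context matches the profile name:

	kubectl --context newest-cni-568453 get nodes -o jsonpath='{.items[0].spec.podCIDR}'

The node's podCIDR should fall inside 10.42.0.0/16.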

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-568453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-568453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.155262313s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/newest-cni/serial/Stop (9.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-568453 --alsologtostderr -v=3
E0214 04:06:36.133575 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/functional-094137/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-568453 --alsologtostderr -v=3: (9.023715207s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.02s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-568453 -n newest-cni-568453
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-568453 -n newest-cni-568453: exit status 7 (87.647985ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-568453 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
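Note: exit status 7 from status corresponds to the Stopped host state shown above, which is why the test treats it as acceptable before enabling the addon. A hedged shell sketch of the same guard:

	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-568453
	if [ $? -eq 7 ]; then
	  # 7 = host stopped; addons can still be toggled on a stopped profile
	  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-568453
	fi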

TestStartStop/group/newest-cni/serial/SecondStart (31.97s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-568453 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2
E0214 04:06:57.020222 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/auto-877693/client.crt: no such file or directory
E0214 04:07:02.838534 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/custom-flannel-877693/client.crt: no such file or directory
E0214 04:07:05.685390 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/no-preload-456178/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-568453 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.29.0-rc.2: (31.563616369s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-568453 -n newest-cni-568453
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (31.97s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-568453 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (3s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-568453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-568453 -n newest-cni-568453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-568453 -n newest-cni-568453: exit status 2 (334.038866ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-568453 -n newest-cni-568453
E0214 04:07:10.914684 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-568453 -n newest-cni-568453: exit status 2 (369.365446ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-568453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-568453 -n newest-cni-568453
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-568453 -n newest-cni-568453
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2j69m" [5edb7fef-a538-42ea-ab8d-c9c2c9b324af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0214 04:08:16.159891 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/old-k8s-version-250819/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2j69m" [5edb7fef-a538-42ea-ab8d-c9c2c9b324af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.003988502s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (15.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2j69m" [5edb7fef-a538-42ea-ab8d-c9c2c9b324af] Running
E0214 04:08:33.957964 1271380 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/false-877693/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003869057s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-241079 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-241079 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-241079 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079: exit status 2 (333.837361ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079: exit status 2 (322.791127ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-241079 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-241079 -n default-k8s-diff-port-241079
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

Test skip (27/335)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0.61s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-763270 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-763270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-763270
--- SKIP: TestDownloadOnlyKic (0.61s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:444: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (4.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-877693 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-877693

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-877693

>>> host: /etc/nsswitch.conf:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/hosts:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/resolv.conf:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-877693

>>> host: crictl pods:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: crictl containers:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> k8s: describe netcat deployment:
error: context "cilium-877693" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-877693" does not exist

>>> k8s: netcat logs:
error: context "cilium-877693" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-877693" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-877693" does not exist

>>> k8s: coredns logs:
error: context "cilium-877693" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-877693" does not exist

>>> k8s: api server logs:
error: context "cilium-877693" does not exist

>>> host: /etc/cni:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: ip a s:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: ip r s:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: iptables-save:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: iptables table nat:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-877693

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-877693

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-877693" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-877693" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-877693

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-877693

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-877693" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-877693" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-877693" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-877693" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-877693" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: kubelet daemon config:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> k8s: kubelet logs:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18165-1266022/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 14 Feb 2024 03:30:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.67.2:8443
  name: offline-docker-014016
contexts:
- context:
    cluster: offline-docker-014016
    extensions:
    - extension:
        last-update: Wed, 14 Feb 2024 03:30:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: offline-docker-014016
  name: offline-docker-014016
current-context: offline-docker-014016
kind: Config
preferences: {}
users:
- name: offline-docker-014016
  user:
    client-certificate: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/offline-docker-014016/client.crt
    client-key: /home/jenkins/minikube-integration/18165-1266022/.minikube/profiles/offline-docker-014016/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-877693

>>> host: docker daemon status:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: docker daemon config:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: docker system info:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: cri-docker daemon status:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: cri-docker daemon config:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: cri-dockerd version:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: containerd daemon status:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: containerd daemon config:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: containerd config dump:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: crio daemon status:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: crio daemon config:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: /etc/crio:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

>>> host: crio config:
* Profile "cilium-877693" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-877693"

----------------------- debugLogs end: cilium-877693 [took: 4.369615791s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-877693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-877693
--- SKIP: TestNetworkPlugins/group/cilium (4.53s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-829149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-829149
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)