Test Report: Docker_Linux_containerd_arm64 17581

8f89b804228acd053c87abbbfb2e31f99595775c:2023-11-14:31875

Test fail (8/308)

TestAddons/parallel/Ingress (38.51s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-135796 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-135796 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-135796 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0559d3fc-e44c-40e6-a719-881d9db1234c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0559d3fc-e44c-40e6-a719-881d9db1234c] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.022305204s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-135796 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.051332213s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-135796 addons disable ingress-dns --alsologtostderr -v=1: (1.070498892s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-135796 addons disable ingress --alsologtostderr -v=1: (7.855828811s)
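
The failing step above is the nslookup against the minikube node IP (192.168.49.2), which timed out after ~15s even though the ingress curl check succeeded. A minimal manual re-check against the same profile could look like the sketch below; the ingress-dns pod name is an assumption, so list the kube-system pods first and substitute the real one:

	# node IP the test resolves against (192.168.49.2 in this run)
	out/minikube-linux-arm64 -p addons-135796 ip
	# repeat the failing query with a short explicit timeout
	nslookup -timeout=5 hello-john.test 192.168.49.2
	# locate the ingress-dns pod and read its logs (pod name assumed)
	kubectl --context addons-135796 -n kube-system get pods
	kubectl --context addons-135796 -n kube-system logs kube-ingress-dns-minikube
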
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-135796
helpers_test.go:235: (dbg) docker inspect addons-135796:

-- stdout --
	[
	    {
	        "Id": "520abfaf274885fb4a7dd1a38f6f563e25337c1c06c53c612135baec14da1f0f",
	        "Created": "2023-11-14T13:35:04.43171203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1252905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T13:35:04.782404958Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/520abfaf274885fb4a7dd1a38f6f563e25337c1c06c53c612135baec14da1f0f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/520abfaf274885fb4a7dd1a38f6f563e25337c1c06c53c612135baec14da1f0f/hostname",
	        "HostsPath": "/var/lib/docker/containers/520abfaf274885fb4a7dd1a38f6f563e25337c1c06c53c612135baec14da1f0f/hosts",
	        "LogPath": "/var/lib/docker/containers/520abfaf274885fb4a7dd1a38f6f563e25337c1c06c53c612135baec14da1f0f/520abfaf274885fb4a7dd1a38f6f563e25337c1c06c53c612135baec14da1f0f-json.log",
	        "Name": "/addons-135796",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-135796:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-135796",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cd66e3d16f96be676cab4dbd1776ef1fbc611cc9e60b1d5a3585d07be4ad8851-init/diff:/var/lib/docker/overlay2/64458dfae02165ba5e5b32269df54406638d6ee619cc4ae1d257dd52e6bbd2d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cd66e3d16f96be676cab4dbd1776ef1fbc611cc9e60b1d5a3585d07be4ad8851/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cd66e3d16f96be676cab4dbd1776ef1fbc611cc9e60b1d5a3585d07be4ad8851/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cd66e3d16f96be676cab4dbd1776ef1fbc611cc9e60b1d5a3585d07be4ad8851/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-135796",
	                "Source": "/var/lib/docker/volumes/addons-135796/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-135796",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-135796",
	                "name.minikube.sigs.k8s.io": "addons-135796",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "55370a2a156152b26bf6dccfa6da75cd336ab2a1b30e3bbad828b52feb1341e4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34332"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34331"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34328"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34330"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34329"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/55370a2a1561",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-135796": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "520abfaf2748",
	                        "addons-135796"
	                    ],
	                    "NetworkID": "a905f96f72efe6bbb1bccdeb1feb264e73280f10f0554ebccf9fad429d974226",
	                    "EndpointID": "1ed5bb28c75e563b97d8219bd091659d6b96cc4fa5626bae0bf4521f5c68c8e8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
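
The full inspect JSON above is preserved for the post-mortem; when only a field or two is needed, docker inspect's Go-template --format flag extracts it directly (the same mechanism minikube's own cli_runner calls below rely on), for example:

	# container run state, and the IP on the profile's network
	docker inspect addons-135796 --format '{{.State.Status}}'
	docker inspect addons-135796 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
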
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-135796 -n addons-135796
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-135796 logs -n 25: (1.648845427s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| delete  | -p download-only-690510                                                                     | download-only-690510   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| delete  | -p download-only-690510                                                                     | download-only-690510   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-939008 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | download-docker-939008                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p download-docker-939008                                                                   | download-docker-939008 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-603085   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | binary-mirror-603085                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46489                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-603085                                                                     | binary-mirror-603085   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| addons  | disable dashboard -p                                                                        | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | addons-135796                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | addons-135796                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-135796 --wait=true                                                                | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                                                              |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-135796 ip                                                                            | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	| addons  | addons-135796 addons disable                                                                | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | -p addons-135796                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-135796 ssh cat                                                                       | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | /opt/local-path-provisioner/pvc-70e8eceb-372f-4aa1-b268-d2d2a4471f68_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-135796 addons disable                                                                | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-135796 addons                                                                        | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-135796 addons                                                                        | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | -p addons-135796                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | addons-135796                                                                               |                        |         |         |                     |                     |
	| addons  | addons-135796 addons                                                                        | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | addons-135796                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-135796 ssh curl -s                                                                   | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-135796 ip                                                                            | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	| addons  | addons-135796 addons disable                                                                | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-135796 addons disable                                                                | addons-135796          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:39 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:34:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:34:56.505910 1252456 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:34:56.506092 1252456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:56.506102 1252456 out.go:309] Setting ErrFile to fd 2...
	I1114 13:34:56.506108 1252456 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:56.506378 1252456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:34:56.506835 1252456 out.go:303] Setting JSON to false
	I1114 13:34:56.507705 1252456 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37043,"bootTime":1699931854,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 13:34:56.507781 1252456 start.go:138] virtualization:  
	I1114 13:34:56.511056 1252456 out.go:177] * [addons-135796] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:34:56.512744 1252456 notify.go:220] Checking for updates...
	I1114 13:34:56.513812 1252456 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:34:56.515848 1252456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:34:56.517578 1252456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:34:56.519122 1252456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 13:34:56.520956 1252456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:34:56.522689 1252456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:34:56.524570 1252456 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:34:56.547743 1252456 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:34:56.547850 1252456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:56.634797 1252456 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:50 SystemTime:2023-11-14 13:34:56.625164297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:56.634918 1252456 docker.go:295] overlay module found
	I1114 13:34:56.637131 1252456 out.go:177] * Using the docker driver based on user configuration
	I1114 13:34:56.639079 1252456 start.go:298] selected driver: docker
	I1114 13:34:56.639094 1252456 start.go:902] validating driver "docker" against <nil>
	I1114 13:34:56.639117 1252456 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:34:56.639799 1252456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:56.706226 1252456 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:38 OomKillDisable:true NGoroutines:50 SystemTime:2023-11-14 13:34:56.69686852 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:56.706391 1252456 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:34:56.706617 1252456 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:34:56.708318 1252456 out.go:177] * Using Docker driver with root privileges
	I1114 13:34:56.709919 1252456 cni.go:84] Creating CNI manager for ""
	I1114 13:34:56.709937 1252456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:34:56.709950 1252456 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:34:56.709966 1252456 start_flags.go:323] config:
	{Name:addons-135796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-135796 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:56.711786 1252456 out.go:177] * Starting control plane node addons-135796 in cluster addons-135796
	I1114 13:34:56.713280 1252456 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1114 13:34:56.714928 1252456 out.go:177] * Pulling base image ...
	I1114 13:34:56.716477 1252456 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1114 13:34:56.716527 1252456 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1114 13:34:56.716540 1252456 cache.go:56] Caching tarball of preloaded images
	I1114 13:34:56.716620 1252456 preload.go:174] Found /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1114 13:34:56.716636 1252456 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on containerd
	I1114 13:34:56.716914 1252456 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:34:56.717038 1252456 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/config.json ...
	I1114 13:34:56.717059 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/config.json: {Name:mk0f8c7da73f04cf653605c8f9522e3ea627be8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:56.734000 1252456 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 13:34:56.734024 1252456 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1114 13:34:56.734047 1252456 cache.go:194] Successfully downloaded all kic artifacts
	I1114 13:34:56.734105 1252456 start.go:365] acquiring machines lock for addons-135796: {Name:mk872a07e87f1e78c4a1e4b3d2b262b3ef056868 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:34:56.734905 1252456 start.go:369] acquired machines lock for "addons-135796" in 779.708µs
	I1114 13:34:56.734942 1252456 start.go:93] Provisioning new machine with config: &{Name:addons-135796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-135796 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1114 13:34:56.735019 1252456 start.go:125] createHost starting for "" (driver="docker")
	I1114 13:34:56.738014 1252456 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1114 13:34:56.738269 1252456 start.go:159] libmachine.API.Create for "addons-135796" (driver="docker")
	I1114 13:34:56.738324 1252456 client.go:168] LocalClient.Create starting
	I1114 13:34:56.738450 1252456 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem
	I1114 13:34:57.450208 1252456 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem
	I1114 13:34:58.247533 1252456 cli_runner.go:164] Run: docker network inspect addons-135796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 13:34:58.265659 1252456 cli_runner.go:211] docker network inspect addons-135796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 13:34:58.265749 1252456 network_create.go:281] running [docker network inspect addons-135796] to gather additional debugging logs...
	I1114 13:34:58.265777 1252456 cli_runner.go:164] Run: docker network inspect addons-135796
	W1114 13:34:58.283143 1252456 cli_runner.go:211] docker network inspect addons-135796 returned with exit code 1
	I1114 13:34:58.283174 1252456 network_create.go:284] error running [docker network inspect addons-135796]: docker network inspect addons-135796: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-135796 not found
	I1114 13:34:58.283189 1252456 network_create.go:286] output of [docker network inspect addons-135796]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-135796 not found
	
	** /stderr **
	I1114 13:34:58.283298 1252456 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:34:58.301631 1252456 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400249d560}
	I1114 13:34:58.301687 1252456 network_create.go:124] attempt to create docker network addons-135796 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1114 13:34:58.301754 1252456 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-135796 addons-135796
	I1114 13:34:58.376441 1252456 network_create.go:108] docker network addons-135796 192.168.49.0/24 created
	I1114 13:34:58.376473 1252456 kic.go:121] calculated static IP "192.168.49.2" for the "addons-135796" container
	I1114 13:34:58.376546 1252456 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 13:34:58.394215 1252456 cli_runner.go:164] Run: docker volume create addons-135796 --label name.minikube.sigs.k8s.io=addons-135796 --label created_by.minikube.sigs.k8s.io=true
	I1114 13:34:58.415629 1252456 oci.go:103] Successfully created a docker volume addons-135796
	I1114 13:34:58.415734 1252456 cli_runner.go:164] Run: docker run --rm --name addons-135796-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135796 --entrypoint /usr/bin/test -v addons-135796:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 13:34:59.814079 1252456 cli_runner.go:217] Completed: docker run --rm --name addons-135796-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135796 --entrypoint /usr/bin/test -v addons-135796:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (1.398303092s)
	I1114 13:34:59.814115 1252456 oci.go:107] Successfully prepared a docker volume addons-135796
	I1114 13:34:59.814164 1252456 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1114 13:34:59.814189 1252456 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 13:34:59.814273 1252456 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-135796:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 13:35:04.344528 1252456 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-135796:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.53021402s)
	I1114 13:35:04.344560 1252456 kic.go:203] duration metric: took 4.530367 seconds to extract preloaded images to volume
	W1114 13:35:04.344697 1252456 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 13:35:04.344843 1252456 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 13:35:04.415325 1252456 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-135796 --name addons-135796 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-135796 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-135796 --network addons-135796 --ip 192.168.49.2 --volume addons-135796:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 13:35:04.790625 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Running}}
	I1114 13:35:04.810770 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:04.841492 1252456 cli_runner.go:164] Run: docker exec addons-135796 stat /var/lib/dpkg/alternatives/iptables
	I1114 13:35:04.944961 1252456 oci.go:144] the created container "addons-135796" has a running status.
	I1114 13:35:04.944989 1252456 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa...
	I1114 13:35:05.263160 1252456 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 13:35:05.285397 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:05.311848 1252456 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 13:35:05.311870 1252456 kic_runner.go:114] Args: [docker exec --privileged addons-135796 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 13:35:05.410553 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:05.450359 1252456 machine.go:88] provisioning docker machine ...
	I1114 13:35:05.450395 1252456 ubuntu.go:169] provisioning hostname "addons-135796"
	I1114 13:35:05.450461 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:05.478162 1252456 main.go:141] libmachine: Using SSH client type: native
	I1114 13:35:05.478924 1252456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34332 <nil> <nil>}
	I1114 13:35:05.478946 1252456 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-135796 && echo "addons-135796" | sudo tee /etc/hostname
	I1114 13:35:05.479591 1252456 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1114 13:35:08.636384 1252456 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-135796
	
	I1114 13:35:08.636473 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:08.656257 1252456 main.go:141] libmachine: Using SSH client type: native
	I1114 13:35:08.656666 1252456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34332 <nil> <nil>}
	I1114 13:35:08.656689 1252456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-135796' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-135796/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-135796' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:35:08.802238 1252456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 13:35:08.802268 1252456 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1246551/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1246551/.minikube}
	I1114 13:35:08.802342 1252456 ubuntu.go:177] setting up certificates
	I1114 13:35:08.802354 1252456 provision.go:83] configureAuth start
	I1114 13:35:08.802434 1252456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135796
	I1114 13:35:08.822509 1252456 provision.go:138] copyHostCerts
	I1114 13:35:08.822596 1252456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.pem (1078 bytes)
	I1114 13:35:08.822743 1252456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1246551/.minikube/cert.pem (1123 bytes)
	I1114 13:35:08.822824 1252456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1246551/.minikube/key.pem (1679 bytes)
	I1114 13:35:08.822882 1252456 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca-key.pem org=jenkins.addons-135796 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-135796]
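
The "generating server cert" line records the org and SAN list baked into server.pem. The same operation sketched with Go's crypto/x509: a throwaway CA is created in-process to keep the example self-contained, where the real flow loads the existing ca.pem and ca-key.pem (error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; the real code would load ca.pem / ca-key.pem from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the same kinds of SANs the log shows.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-135796"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "addons-135796"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
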
	I1114 13:35:08.983545 1252456 provision.go:172] copyRemoteCerts
	I1114 13:35:08.983627 1252456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:35:08.983674 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:09.008732 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:09.113443 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1114 13:35:09.144841 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1114 13:35:09.175829 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:35:09.205392 1252456 provision.go:86] duration metric: configureAuth took 403.020487ms
	I1114 13:35:09.205431 1252456 ubuntu.go:193] setting minikube options for container-runtime
	I1114 13:35:09.205651 1252456 config.go:182] Loaded profile config "addons-135796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:35:09.205665 1252456 machine.go:91] provisioned docker machine in 3.755287011s
	I1114 13:35:09.205671 1252456 client.go:171] LocalClient.Create took 12.467338024s
	I1114 13:35:09.205688 1252456 start.go:167] duration metric: libmachine.API.Create for "addons-135796" took 12.467420994s
	I1114 13:35:09.205700 1252456 start.go:300] post-start starting for "addons-135796" (driver="docker")
	I1114 13:35:09.205709 1252456 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:35:09.205759 1252456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:35:09.205807 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:09.225424 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:09.328278 1252456 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:35:09.332644 1252456 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 13:35:09.332690 1252456 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 13:35:09.332707 1252456 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 13:35:09.332720 1252456 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 13:35:09.332734 1252456 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1246551/.minikube/addons for local assets ...
	I1114 13:35:09.332846 1252456 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1246551/.minikube/files for local assets ...
	I1114 13:35:09.332877 1252456 start.go:303] post-start completed in 127.171299ms
	I1114 13:35:09.333191 1252456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135796
	I1114 13:35:09.351326 1252456 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/config.json ...
	I1114 13:35:09.351606 1252456 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:35:09.351655 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:09.369834 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:09.467433 1252456 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 13:35:09.473590 1252456 start.go:128] duration metric: createHost completed in 12.73855495s
	I1114 13:35:09.473615 1252456 start.go:83] releasing machines lock for "addons-135796", held for 12.738694215s
	I1114 13:35:09.473687 1252456 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-135796
	I1114 13:35:09.491938 1252456 ssh_runner.go:195] Run: cat /version.json
	I1114 13:35:09.492012 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:09.492377 1252456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:35:09.492446 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:09.522945 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:09.528934 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:09.621639 1252456 ssh_runner.go:195] Run: systemctl --version
	I1114 13:35:09.755772 1252456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:35:09.761480 1252456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1114 13:35:09.791309 1252456 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1114 13:35:09.791398 1252456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:35:09.824929 1252456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1114 13:35:09.824956 1252456 start.go:472] detecting cgroup driver to use...
	I1114 13:35:09.824997 1252456 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 13:35:09.825049 1252456 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 13:35:09.840370 1252456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 13:35:09.854453 1252456 docker.go:203] disabling cri-docker service (if available) ...
	I1114 13:35:09.854520 1252456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 13:35:09.871008 1252456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 13:35:09.887629 1252456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 13:35:10.021156 1252456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 13:35:10.137370 1252456 docker.go:219] disabling docker service ...
	I1114 13:35:10.137444 1252456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 13:35:10.161310 1252456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 13:35:10.178041 1252456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 13:35:10.275684 1252456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 13:35:10.378471 1252456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 13:35:10.392591 1252456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:35:10.415209 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1114 13:35:10.428051 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 13:35:10.441178 1252456 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 13:35:10.441271 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 13:35:10.454927 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:35:10.467511 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 13:35:10.480103 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:35:10.492736 1252456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:35:10.504910 1252456 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
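
The sed edits above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, the runtime-type rewrites, conf_dir) all share one shape: an anchored regex replace over /etc/containerd/config.toml. Here is the SystemdCgroup edit expressed in Go, as a sketch of the equivalent logic; minikube itself shells out to sed on the remote host:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Per-line anchored match, preserving leading indentation, same as
        // sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }
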
	I1114 13:35:10.517790 1252456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:35:10.528827 1252456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:35:10.539845 1252456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:35:10.635293 1252456 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1114 13:35:10.778872 1252456 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1114 13:35:10.779024 1252456 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1114 13:35:10.784098 1252456 start.go:540] Will wait 60s for crictl version
	I1114 13:35:10.784216 1252456 ssh_runner.go:195] Run: which crictl
	I1114 13:35:10.789097 1252456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 13:35:10.830566 1252456 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
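
"Will wait 60s for socket path" and "Will wait 60s for crictl version" are stat-and-retry loops with a deadline. A minimal version of the socket wait; the 500ms interval and the error text are assumptions, not minikube's values:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("containerd socket is up")
    }
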
	I1114 13:35:10.830732 1252456 ssh_runner.go:195] Run: containerd --version
	I1114 13:35:10.862065 1252456 ssh_runner.go:195] Run: containerd --version
	I1114 13:35:10.897567 1252456 out.go:177] * Preparing Kubernetes v1.28.3 on containerd 1.6.24 ...
	I1114 13:35:10.899290 1252456 cli_runner.go:164] Run: docker network inspect addons-135796 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:35:10.918435 1252456 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1114 13:35:10.923401 1252456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:35:10.937780 1252456 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1114 13:35:10.937851 1252456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:35:10.977877 1252456 containerd.go:604] all images are preloaded for containerd runtime.
	I1114 13:35:10.977903 1252456 containerd.go:518] Images already preloaded, skipping extraction
	I1114 13:35:10.977961 1252456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:35:11.023136 1252456 containerd.go:604] all images are preloaded for containerd runtime.
	I1114 13:35:11.023163 1252456 cache_images.go:84] Images are preloaded, skipping loading
	I1114 13:35:11.023226 1252456 ssh_runner.go:195] Run: sudo crictl info
	I1114 13:35:11.063358 1252456 cni.go:84] Creating CNI manager for ""
	I1114 13:35:11.063384 1252456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:35:11.063444 1252456 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 13:35:11.063468 1252456 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-135796 NodeName:addons-135796 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 13:35:11.063603 1252456 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-135796"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 13:35:11.063684 1252456 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=addons-135796 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-135796 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 13:35:11.063754 1252456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 13:35:11.075402 1252456 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 13:35:11.075482 1252456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 13:35:11.086884 1252456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (385 bytes)
	I1114 13:35:11.110277 1252456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 13:35:11.133493 1252456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
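
The "scp memory --> <path> (N bytes)" lines stream in-memory buffers (the kubelet drop-in, the service unit, kubeadm.yaml.new) to the remote filesystem over the existing SSH connection instead of copying local files. One plausible way to sketch that, piping the buffer through sudo tee (writeRemoteFile is hypothetical and not minikube's implementation):

    package provision

    import (
        "bytes"
        "fmt"
        "path/filepath"

        "golang.org/x/crypto/ssh"
    )

    // writeRemoteFile streams data to path on the remote host: create the
    // parent directory, then let tee write the file with root privileges.
    func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        cmd := fmt.Sprintf("sudo mkdir -p %s && sudo tee %s >/dev/null",
            filepath.Dir(path), path)
        return sess.Run(cmd)
    }
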
	I1114 13:35:11.155756 1252456 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1114 13:35:11.160369 1252456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
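
The /bin/bash one-liner above is an idempotent hosts-file update: filter out any existing line for control-plane.minikube.internal, append the fresh mapping, and copy the result back into place. The same logic in plain Go, with ensureHostsEntry as a hypothetical helper:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale line ending in "\t<name>" and appends
    // the desired "<ip>\t<name>" mapping, mirroring the grep -v / echo / cp
    // pipeline in the log.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
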
	I1114 13:35:11.174305 1252456 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796 for IP: 192.168.49.2
	I1114 13:35:11.174351 1252456 certs.go:190] acquiring lock for shared ca certs: {Name:mk0ee92e20cab7092abbb9be784c32bf39215f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:11.174816 1252456 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key
	I1114 13:35:11.503170 1252456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt ...
	I1114 13:35:11.503203 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt: {Name:mka72a848e3164314d85406de1d6e2745542ea9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:11.503839 1252456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key ...
	I1114 13:35:11.503857 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key: {Name:mk4094f7d4b49995dc9a602691ebe0e569fba632 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:11.504313 1252456 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key
	I1114 13:35:11.801059 1252456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.crt ...
	I1114 13:35:11.801094 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.crt: {Name:mka03b648ab94a5d4c39d4eb6de4b47c2760bc1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:11.801286 1252456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key ...
	I1114 13:35:11.801300 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key: {Name:mk608dcdf6c3a1864548facc645766801ed9c0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:11.801776 1252456 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.key
	I1114 13:35:11.801795 1252456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt with IP's: []
	I1114 13:35:12.227035 1252456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt ...
	I1114 13:35:12.227070 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: {Name:mk0d82d2b81700a2471891501eb22c61c8623b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:12.227749 1252456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.key ...
	I1114 13:35:12.227766 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.key: {Name:mk2cec4b64abdb61f945331e4ba1d4f6c62f9590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:12.227859 1252456 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.key.dd3b5fb2
	I1114 13:35:12.227878 1252456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 13:35:12.737779 1252456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.crt.dd3b5fb2 ...
	I1114 13:35:12.737815 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.crt.dd3b5fb2: {Name:mk15840530508b5e023cb9e0d0f771d9959f943f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:12.738004 1252456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.key.dd3b5fb2 ...
	I1114 13:35:12.738019 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.key.dd3b5fb2: {Name:mk6a08be0b220f24fdfa9cf1176d54ae51d7d3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:12.738600 1252456 certs.go:337] copying /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.crt
	I1114 13:35:12.738689 1252456 certs.go:341] copying /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.key
	I1114 13:35:12.738757 1252456 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.key
	I1114 13:35:12.738780 1252456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.crt with IP's: []
	I1114 13:35:13.284623 1252456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.crt ...
	I1114 13:35:13.284657 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.crt: {Name:mkd38708f4eee5ad5281c228966ea87e55bc5759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:13.285366 1252456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.key ...
	I1114 13:35:13.285385 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.key: {Name:mk36895c6a4c34050f1dafe076d75eaa9d180f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:13.285985 1252456 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 13:35:13.286030 1252456 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem (1078 bytes)
	I1114 13:35:13.286060 1252456 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem (1123 bytes)
	I1114 13:35:13.286098 1252456 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem (1679 bytes)
	I1114 13:35:13.286764 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 13:35:13.317971 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 13:35:13.348035 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 13:35:13.377558 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 13:35:13.407462 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 13:35:13.437487 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 13:35:13.467576 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 13:35:13.498350 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 13:35:13.528256 1252456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 13:35:13.558149 1252456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 13:35:13.579901 1252456 ssh_runner.go:195] Run: openssl version
	I1114 13:35:13.587185 1252456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 13:35:13.599557 1252456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:35:13.604366 1252456 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:35:13.604435 1252456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:35:13.615728 1252456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 13:35:13.628238 1252456 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 13:35:13.632927 1252456 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 13:35:13.633022 1252456 kubeadm.go:404] StartCluster: {Name:addons-135796 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-135796 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:35:13.633111 1252456 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1114 13:35:13.633185 1252456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 13:35:13.678265 1252456 cri.go:89] found id: ""
	I1114 13:35:13.678364 1252456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 13:35:13.690963 1252456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 13:35:13.702577 1252456 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1114 13:35:13.702655 1252456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 13:35:13.714120 1252456 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 13:35:13.714188 1252456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1114 13:35:13.768517 1252456 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 13:35:13.768617 1252456 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 13:35:13.814071 1252456 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1114 13:35:13.814183 1252456 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1114 13:35:13.814247 1252456 kubeadm.go:322] OS: Linux
	I1114 13:35:13.814314 1252456 kubeadm.go:322] CGROUPS_CPU: enabled
	I1114 13:35:13.814393 1252456 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1114 13:35:13.814461 1252456 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1114 13:35:13.814540 1252456 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1114 13:35:13.814609 1252456 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1114 13:35:13.814688 1252456 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1114 13:35:13.814763 1252456 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1114 13:35:13.814836 1252456 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1114 13:35:13.814899 1252456 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1114 13:35:13.893464 1252456 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 13:35:13.893583 1252456 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 13:35:13.893676 1252456 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 13:35:14.154521 1252456 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 13:35:14.157762 1252456 out.go:204]   - Generating certificates and keys ...
	I1114 13:35:14.157902 1252456 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 13:35:14.157977 1252456 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 13:35:14.355818 1252456 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 13:35:14.819055 1252456 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 13:35:15.615878 1252456 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 13:35:16.213931 1252456 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 13:35:16.508862 1252456 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 13:35:16.509255 1252456 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-135796 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:35:16.736241 1252456 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 13:35:16.736646 1252456 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-135796 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:35:16.996041 1252456 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 13:35:17.982641 1252456 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 13:35:19.030778 1252456 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 13:35:19.031078 1252456 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 13:35:19.477358 1252456 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 13:35:20.157945 1252456 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 13:35:21.835973 1252456 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 13:35:22.452870 1252456 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 13:35:22.453915 1252456 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 13:35:22.457991 1252456 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 13:35:22.460042 1252456 out.go:204]   - Booting up control plane ...
	I1114 13:35:22.460209 1252456 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 13:35:22.460301 1252456 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 13:35:22.461667 1252456 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 13:35:22.476716 1252456 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 13:35:22.477805 1252456 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 13:35:22.477888 1252456 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 13:35:22.588042 1252456 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 13:35:30.092521 1252456 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.504750 seconds
	I1114 13:35:30.092670 1252456 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 13:35:30.114028 1252456 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 13:35:30.652379 1252456 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 13:35:30.652566 1252456 kubeadm.go:322] [mark-control-plane] Marking the node addons-135796 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 13:35:31.164444 1252456 kubeadm.go:322] [bootstrap-token] Using token: xaby4z.mxku5b0b05ilq7vq
	I1114 13:35:31.166456 1252456 out.go:204]   - Configuring RBAC rules ...
	I1114 13:35:31.166577 1252456 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 13:35:31.172279 1252456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 13:35:31.180556 1252456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 13:35:31.184632 1252456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 13:35:31.190155 1252456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 13:35:31.194228 1252456 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 13:35:31.208222 1252456 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 13:35:31.439112 1252456 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 13:35:31.588854 1252456 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 13:35:31.589808 1252456 kubeadm.go:322] 
	I1114 13:35:31.589891 1252456 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 13:35:31.589906 1252456 kubeadm.go:322] 
	I1114 13:35:31.590007 1252456 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 13:35:31.590017 1252456 kubeadm.go:322] 
	I1114 13:35:31.590041 1252456 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 13:35:31.590117 1252456 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 13:35:31.590179 1252456 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 13:35:31.590192 1252456 kubeadm.go:322] 
	I1114 13:35:31.590264 1252456 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 13:35:31.590274 1252456 kubeadm.go:322] 
	I1114 13:35:31.590322 1252456 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 13:35:31.590337 1252456 kubeadm.go:322] 
	I1114 13:35:31.590412 1252456 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 13:35:31.590493 1252456 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 13:35:31.590565 1252456 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 13:35:31.590574 1252456 kubeadm.go:322] 
	I1114 13:35:31.590673 1252456 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 13:35:31.590748 1252456 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 13:35:31.590756 1252456 kubeadm.go:322] 
	I1114 13:35:31.590847 1252456 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xaby4z.mxku5b0b05ilq7vq \
	I1114 13:35:31.590952 1252456 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:94fcb55293605b3288e68ff2d845228e62826801cfd59b170f6499414c73b553 \
	I1114 13:35:31.590978 1252456 kubeadm.go:322] 	--control-plane 
	I1114 13:35:31.590985 1252456 kubeadm.go:322] 
	I1114 13:35:31.591079 1252456 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 13:35:31.591091 1252456 kubeadm.go:322] 
	I1114 13:35:31.591174 1252456 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xaby4z.mxku5b0b05ilq7vq \
	I1114 13:35:31.591272 1252456 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:94fcb55293605b3288e68ff2d845228e62826801cfd59b170f6499414c73b553 
	I1114 13:35:31.595952 1252456 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 13:35:31.596096 1252456 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 13:35:31.596121 1252456 cni.go:84] Creating CNI manager for ""
	I1114 13:35:31.596133 1252456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:35:31.598165 1252456 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 13:35:31.599880 1252456 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 13:35:31.610717 1252456 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 13:35:31.610735 1252456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 13:35:31.651211 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 13:35:32.606062 1252456 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 13:35:32.606203 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:32.606287 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b minikube.k8s.io/name=addons-135796 minikube.k8s.io/updated_at=2023_11_14T13_35_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:32.779148 1252456 ops.go:34] apiserver oom_adj: -16
	I1114 13:35:32.779239 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:32.903768 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:33.501175 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:34.003029 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:34.501924 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:35.008159 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:35.501627 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:36.003724 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:36.501165 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:37.002263 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:37.501506 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:38.007423 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:38.501074 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:39.004656 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:39.501725 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:40.001102 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:40.501765 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:41.001310 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:41.501635 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:42.004540 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:42.500966 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:43.006729 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:43.501371 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:44.003315 1252456 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:44.159997 1252456 kubeadm.go:1081] duration metric: took 11.553843639s to wait for elevateKubeSystemPrivileges.
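
The burst of identical "kubectl get sa default" runs between 13:35:32 and 13:35:44 is a poll loop: cluster startup is only considered done once the default service account exists in the default namespace. Roughly the following shape, where the interval and timeout are assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the timeout expires; paths and flags mirror the log lines above.
    func waitForDefaultSA(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.3/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA(2 * time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default service account is ready")
    }
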
	I1114 13:35:44.160024 1252456 kubeadm.go:406] StartCluster complete in 30.527005785s
	I1114 13:35:44.160042 1252456 settings.go:142] acquiring lock: {Name:mk455c6657f7b4efcfce9307d68afe3ebcb2d6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:44.160155 1252456 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:35:44.160591 1252456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/kubeconfig: {Name:mk184f3168528a648dd99c6da0ef538261acbd95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:44.161183 1252456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 13:35:44.161451 1252456 config.go:182] Loaded profile config "addons-135796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:35:44.161610 1252456 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1114 13:35:44.161695 1252456 addons.go:69] Setting volumesnapshots=true in profile "addons-135796"
	I1114 13:35:44.161714 1252456 addons.go:231] Setting addon volumesnapshots=true in "addons-135796"
	I1114 13:35:44.161769 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.162220 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.163612 1252456 addons.go:69] Setting ingress-dns=true in profile "addons-135796"
	I1114 13:35:44.163635 1252456 addons.go:231] Setting addon ingress-dns=true in "addons-135796"
	I1114 13:35:44.163677 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.164104 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.164373 1252456 addons.go:69] Setting inspektor-gadget=true in profile "addons-135796"
	I1114 13:35:44.164395 1252456 addons.go:231] Setting addon inspektor-gadget=true in "addons-135796"
	I1114 13:35:44.164438 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.164898 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.165244 1252456 addons.go:69] Setting cloud-spanner=true in profile "addons-135796"
	I1114 13:35:44.165272 1252456 addons.go:231] Setting addon cloud-spanner=true in "addons-135796"
	I1114 13:35:44.165307 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.165703 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.173205 1252456 addons.go:69] Setting metrics-server=true in profile "addons-135796"
	I1114 13:35:44.173248 1252456 addons.go:231] Setting addon metrics-server=true in "addons-135796"
	I1114 13:35:44.173296 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.173732 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.181860 1252456 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-135796"
	I1114 13:35:44.181931 1252456 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-135796"
	I1114 13:35:44.181980 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.182432 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.182827 1252456 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-135796"
	I1114 13:35:44.182854 1252456 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-135796"
	I1114 13:35:44.182968 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.183374 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.189214 1252456 addons.go:69] Setting default-storageclass=true in profile "addons-135796"
	I1114 13:35:44.189250 1252456 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-135796"
	I1114 13:35:44.189605 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.190257 1252456 addons.go:69] Setting registry=true in profile "addons-135796"
	I1114 13:35:44.190306 1252456 addons.go:231] Setting addon registry=true in "addons-135796"
	I1114 13:35:44.190353 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.190897 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.200903 1252456 addons.go:69] Setting gcp-auth=true in profile "addons-135796"
	I1114 13:35:44.200951 1252456 mustload.go:65] Loading cluster: addons-135796
	I1114 13:35:44.201179 1252456 config.go:182] Loaded profile config "addons-135796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:35:44.201437 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.201573 1252456 addons.go:69] Setting storage-provisioner=true in profile "addons-135796"
	I1114 13:35:44.201589 1252456 addons.go:231] Setting addon storage-provisioner=true in "addons-135796"
	I1114 13:35:44.201627 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.202013 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.221803 1252456 addons.go:69] Setting ingress=true in profile "addons-135796"
	I1114 13:35:44.221839 1252456 addons.go:231] Setting addon ingress=true in "addons-135796"
	I1114 13:35:44.221897 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.222340 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.222975 1252456 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-135796"
	I1114 13:35:44.222997 1252456 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-135796"
	I1114 13:35:44.223262 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.366400 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1114 13:35:44.368077 1252456 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1114 13:35:44.368134 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1114 13:35:44.368241 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.386225 1252456 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1114 13:35:44.395618 1252456 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 13:35:44.395683 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1114 13:35:44.395788 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.409885 1252456 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:35:44.414375 1252456 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:35:44.414457 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 13:35:44.414554 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
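
The "scp memory -->" entries copy manifests rendered in memory straight onto the node, and each one is preceded by a port lookup because SSH into the kic container goes through a port published on 127.0.0.1 (34332 in this run). The lookup by itself, using the same Go template that appears in the log:

    port="$(docker container inspect addons-135796 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')"
    echo "ssh endpoint: 127.0.0.1:${port}"
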
	I1114 13:35:44.430701 1252456 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1114 13:35:44.432506 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1114 13:35:44.432532 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1114 13:35:44.432685 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.436335 1252456 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1114 13:35:44.437999 1252456 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 13:35:44.438019 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 13:35:44.438111 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.447761 1252456 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1114 13:35:44.449841 1252456 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1114 13:35:44.449877 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1114 13:35:44.449960 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.459672 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.463750 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1114 13:35:44.466160 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1114 13:35:44.467812 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1114 13:35:44.469318 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1114 13:35:44.471129 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1114 13:35:44.473093 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1114 13:35:44.475200 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1114 13:35:44.374786 1252456 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
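
That sed pipeline rewrites the CoreDNS Corefile in place: it inserts a hosts block ahead of the "forward . /etc/resolv.conf" line so that host.minikube.internal resolves to the container gateway, and a log directive ahead of errors. Reconstructed from the sed expressions, the injected fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
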
	I1114 13:35:44.484558 1252456 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1114 13:35:44.485185 1252456 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-135796" context rescaled to 1 replicas
	I1114 13:35:44.489331 1252456 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-135796"
	I1114 13:35:44.490365 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.490836 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.491126 1252456 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1114 13:35:44.495219 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1114 13:35:44.495256 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1114 13:35:44.495328 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.491450 1252456 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 13:35:44.507637 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1114 13:35:44.507721 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.491457 1252456 out.go:177]   - Using image docker.io/registry:2.8.3
	I1114 13:35:44.492716 1252456 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1114 13:35:44.492745 1252456 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1114 13:35:44.522091 1252456 addons.go:231] Setting addon default-storageclass=true in "addons-135796"
	I1114 13:35:44.527684 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:44.528199 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:44.530310 1252456 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 13:35:44.535338 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.543648 1252456 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 13:35:44.542063 1252456 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1114 13:35:44.542071 1252456 out.go:177] * Verifying Kubernetes components...
	I1114 13:35:44.555064 1252456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
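
systemctl is-active --quiet prints nothing and reports purely through its exit status (0 only when the unit is active), which makes it usable as a yes/no probe for the kubelet, e.g.:

    sudo systemctl is-active --quiet kubelet && echo "kubelet is active"
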
	I1114 13:35:44.559208 1252456 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1114 13:35:44.559233 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1114 13:35:44.559298 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.553198 1252456 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 13:35:44.560950 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1114 13:35:44.561033 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.646387 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.692377 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.693073 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.738635 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.750576 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.757610 1252456 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 13:35:44.757631 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 13:35:44.757696 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.771562 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.781005 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.787219 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.800455 1252456 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1114 13:35:44.807464 1252456 out.go:177]   - Using image docker.io/busybox:stable
	I1114 13:35:44.813667 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:44.813921 1252456 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 13:35:44.813938 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1114 13:35:44.813998 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:44.849705 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	W1114 13:35:44.851490 1252456 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1114 13:35:44.851513 1252456 retry.go:31] will retry after 215.206085ms: ssh: handshake failed: EOF
	I1114 13:35:44.859756 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
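
The handshake EOF above is the usual symptom of dialing sshd before it is ready to accept connections, and sshutil answers it with a short randomized retry instead of failing the run. The same shape by hand, with the port, key, and user taken from this run:

    for attempt in 1 2 3 4 5; do
      ssh -p 34332 -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa \
        docker@127.0.0.1 true && break
      sleep 0.3
    done
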
	I1114 13:35:45.235501 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:35:45.359087 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1114 13:35:45.399145 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 13:35:45.450305 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 13:35:45.492896 1252456 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1114 13:35:45.492923 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1114 13:35:45.519300 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 13:35:45.554923 1252456 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 13:35:45.554950 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1114 13:35:45.567060 1252456 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1114 13:35:45.567093 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1114 13:35:45.623858 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 13:35:45.635400 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1114 13:35:45.635434 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1114 13:35:45.681852 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1114 13:35:45.681879 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1114 13:35:45.692783 1252456 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1114 13:35:45.692828 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1114 13:35:45.879279 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1114 13:35:45.932386 1252456 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1114 13:35:45.932457 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1114 13:35:46.014406 1252456 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 13:35:46.014486 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 13:35:46.030986 1252456 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1114 13:35:46.031062 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1114 13:35:46.050644 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1114 13:35:46.050667 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1114 13:35:46.051764 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1114 13:35:46.051781 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1114 13:35:46.223157 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1114 13:35:46.223180 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1114 13:35:46.305373 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 13:35:46.350359 1252456 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 13:35:46.350433 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 13:35:46.361760 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1114 13:35:46.361828 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1114 13:35:46.399365 1252456 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 13:35:46.399442 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1114 13:35:46.416032 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1114 13:35:46.416125 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1114 13:35:46.418986 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1114 13:35:46.419059 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1114 13:35:46.555526 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 13:35:46.587313 1252456 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1114 13:35:46.587390 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1114 13:35:46.724763 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1114 13:35:46.724843 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1114 13:35:46.730118 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 13:35:46.948550 1252456 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1114 13:35:46.948621 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1114 13:35:46.959613 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1114 13:35:46.959684 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1114 13:35:47.042723 1252456 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.564316762s)
	I1114 13:35:47.042756 1252456 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1114 13:35:47.042780 1252456 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.487692698s)
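
With the replace finished, the injected record can be confirmed straight from the ConfigMap:

    kubectl --context addons-135796 -n kube-system get configmap coredns -o yaml \
      | grep -n 'host.minikube.internal'
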
	I1114 13:35:47.043636 1252456 node_ready.go:35] waiting up to 6m0s for node "addons-135796" to be "Ready" ...
	I1114 13:35:47.047215 1252456 node_ready.go:49] node "addons-135796" has status "Ready":"True"
	I1114 13:35:47.047282 1252456 node_ready.go:38] duration metric: took 3.627348ms waiting for node "addons-135796" to be "Ready" ...
	I1114 13:35:47.047327 1252456 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:35:47.055643 1252456 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace to be "Ready" ...
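
node_ready and pod_ready poll the API until the Ready condition reports True; the kubectl equivalent of the two waits started here would be roughly:

    kubectl --context addons-135796 wait --for=condition=Ready \
      node/addons-135796 --timeout=6m
    kubectl --context addons-135796 -n kube-system wait --for=condition=Ready \
      pod/coredns-5dd5756b68-kbgpr --timeout=6m
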
	I1114 13:35:47.179564 1252456 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1114 13:35:47.179634 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1114 13:35:47.237991 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1114 13:35:47.238068 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1114 13:35:47.399265 1252456 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1114 13:35:47.399336 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1114 13:35:47.487672 1252456 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 13:35:47.487743 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1114 13:35:47.670363 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 13:35:47.702041 1252456 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1114 13:35:47.702064 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1114 13:35:47.993361 1252456 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 13:35:47.993432 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1114 13:35:48.152268 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 13:35:49.076972 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:35:49.617070 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.381523715s)
	I1114 13:35:49.617158 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.258041326s)
	I1114 13:35:51.089649 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:35:51.272624 1252456 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1114 13:35:51.272725 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:51.304415 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:51.592643 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.193458178s)
	I1114 13:35:51.592676 1252456 addons.go:467] Verifying addon ingress=true in "addons-135796"
	I1114 13:35:51.592713 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.142309588s)
	I1114 13:35:51.594988 1252456 out.go:177] * Verifying ingress addon...
	I1114 13:35:51.592837 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.073510706s)
	I1114 13:35:51.592871 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.968988152s)
	I1114 13:35:51.592901 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.713547962s)
	I1114 13:35:51.592943 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.287500187s)
	I1114 13:35:51.593105 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.037505041s)
	I1114 13:35:51.593240 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.922849946s)
	I1114 13:35:51.593262 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.863069697s)
	W1114 13:35:51.598231 1252456 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1114 13:35:51.598260 1252456 retry.go:31] will retry after 154.389314ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
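
This failure is an ordering race rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, and the API server has not yet established the new kind when the mapping is resolved, hence "ensure CRDs are installed first". The retry below clears it; scripted by hand, the race is avoided by waiting for the CRD to become Established before applying objects of that kind:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml
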
	I1114 13:35:51.597073 1252456 addons.go:467] Verifying addon registry=true in "addons-135796"
	I1114 13:35:51.601596 1252456 out.go:177] * Verifying registry addon...
	I1114 13:35:51.597928 1252456 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1114 13:35:51.598101 1252456 addons.go:467] Verifying addon metrics-server=true in "addons-135796"
	I1114 13:35:51.604506 1252456 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1114 13:35:51.626081 1252456 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1114 13:35:51.626148 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:51.629099 1252456 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1114 13:35:51.629592 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1114 13:35:51.629529 1252456 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
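
That warning is an optimistic-concurrency conflict: making "standard" the default involves a read-modify-write update that clears the is-default-class annotation on the existing local-path class, and the object changed between the read and the write. Done by hand, a PATCH sidesteps the conflict because it carries no resourceVersion:

    kubectl --context addons-135796 patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
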
	I1114 13:35:51.639252 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:51.645119 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:51.662471 1252456 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1114 13:35:51.715453 1252456 addons.go:231] Setting addon gcp-auth=true in "addons-135796"
	I1114 13:35:51.715543 1252456 host.go:66] Checking if "addons-135796" exists ...
	I1114 13:35:51.716030 1252456 cli_runner.go:164] Run: docker container inspect addons-135796 --format={{.State.Status}}
	I1114 13:35:51.750494 1252456 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1114 13:35:51.750548 1252456 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-135796
	I1114 13:35:51.752982 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 13:35:51.780999 1252456 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34332 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/addons-135796/id_rsa Username:docker}
	I1114 13:35:52.158812 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:52.241981 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:52.672408 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:52.673586 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:53.145223 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:53.150968 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:53.495337 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.342948214s)
	I1114 13:35:53.495438 1252456 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-135796"
	I1114 13:35:53.497661 1252456 out.go:177] * Verifying csi-hostpath-driver addon...
	I1114 13:35:53.500524 1252456 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1114 13:35:53.506780 1252456 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 13:35:53.506848 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:53.512703 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
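
Each kapi.go:96 line is one tick of a poll that re-lists the labeled pods (roughly every 500ms here) until they leave Pending, so the long runs of near-identical lines that follow are the loop working, not a stall. The equivalent single wait:

    kubectl --context addons-135796 -n kube-system wait --for=condition=Ready \
      pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m
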
	I1114 13:35:53.584503 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:35:53.646929 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:53.654887 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:53.838013 1252456 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.084985667s)
	I1114 13:35:53.838148 1252456 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.087633168s)
	I1114 13:35:53.841994 1252456 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 13:35:53.844189 1252456 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1114 13:35:53.846524 1252456 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1114 13:35:53.846595 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1114 13:35:53.877435 1252456 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1114 13:35:53.877466 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1114 13:35:53.915725 1252456 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 13:35:53.915747 1252456 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1114 13:35:53.950081 1252456 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 13:35:54.019523 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:54.144056 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:54.155368 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:54.518338 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:54.646616 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:54.651324 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:54.864463 1252456 addons.go:467] Verifying addon gcp-auth=true in "addons-135796"
	I1114 13:35:54.866411 1252456 out.go:177] * Verifying gcp-auth addon...
	I1114 13:35:54.870852 1252456 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1114 13:35:54.875993 1252456 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1114 13:35:54.876055 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:54.882877 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:55.020895 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:55.144502 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:55.150468 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:55.387520 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:55.520080 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:55.644884 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:55.650599 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:55.886678 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:56.019375 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:56.077528 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:35:56.144874 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:56.151031 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:56.387492 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:56.522824 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:56.644845 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:56.650958 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:56.887209 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:57.018945 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:57.144150 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:57.150474 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:57.387342 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:57.519979 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:57.644206 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:57.650777 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:57.887561 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:58.020911 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:58.079196 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:35:58.145434 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:58.150996 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:58.386981 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:58.528487 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:58.645266 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:58.650029 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:58.887143 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:59.018878 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:59.144442 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:59.151472 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:59.386989 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:59.519377 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:59.644866 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:59.651265 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:59.886754 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:00.036452 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:00.082872 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:00.185521 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:00.188950 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:00.387787 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:00.519303 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:00.644025 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:00.651000 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:00.887099 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:01.019831 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:01.145307 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:01.150233 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:01.387834 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:01.520368 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:01.645740 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:01.655400 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:01.888358 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:02.030379 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:02.144335 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:02.150229 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:02.387477 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:02.519768 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:02.576062 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:02.645216 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:02.650284 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:02.887255 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:03.019396 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:03.144130 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:03.151072 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:03.386907 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:03.518955 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:03.645777 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:03.650532 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:03.887837 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:04.019561 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:04.144211 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:04.150112 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:04.387096 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:04.519064 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:04.576723 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:04.643933 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:04.659202 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:04.887163 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:05.018718 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:05.144889 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:05.150815 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:05.386793 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:05.519482 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:05.644783 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:05.649792 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:05.886654 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:06.019100 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:06.143903 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:06.150686 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:06.386572 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:06.519866 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:06.644722 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:06.652341 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:06.887505 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:07.019379 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:07.076334 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:07.143888 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:07.150696 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:07.386893 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:07.524535 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:07.644544 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:07.650265 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:07.887350 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:08.019188 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:08.144919 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:08.150832 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:08.386622 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:08.518743 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:08.644035 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:08.650652 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:08.887078 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:09.019829 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:09.076757 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:09.143661 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:09.150128 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:09.387561 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:09.519677 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:09.648456 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:09.651698 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:09.886607 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:10.032632 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:10.144248 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:10.149843 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:10.387127 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:10.519123 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:10.644722 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:10.650318 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:10.887466 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:11.019290 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:11.077311 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:11.143909 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:11.150662 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:11.386863 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:11.518296 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:11.644625 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:11.649997 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:11.886938 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:12.019241 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:12.144338 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:12.150251 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:12.387452 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:12.518433 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:12.644344 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:12.650057 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:12.886961 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:13.018693 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:13.145322 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:13.149860 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:13.387650 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:13.519158 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:13.577581 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:13.644242 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:13.650532 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:13.887525 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:14.019313 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:14.144132 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:14.150649 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:14.387213 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:14.521459 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:14.644066 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:14.650822 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:14.886986 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:15.023977 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:15.157785 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:15.158827 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:15.386982 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:15.519206 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:15.644949 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:15.652217 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:15.886550 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:16.019694 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:16.078145 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:16.145107 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:16.150589 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:16.386686 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:16.519277 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:16.644511 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:16.651331 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:16.887488 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:17.019240 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:17.144616 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:17.150482 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:17.391864 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:17.521768 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:17.644905 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:17.655023 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:17.887316 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:18.019233 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:18.144465 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:18.151330 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:18.388006 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:18.519090 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:18.576172 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:18.645261 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:18.650161 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:18.887787 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:19.019499 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:19.144319 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:19.150546 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:19.388117 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:19.518218 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:19.644959 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:19.650821 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:19.886982 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:20.020350 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:20.145213 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:20.150612 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:20.386510 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:20.521124 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:20.579989 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:20.645048 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:20.651998 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:20.887731 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:21.021036 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:21.145030 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:21.151760 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:21.387484 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:21.519903 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:21.645460 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:21.650293 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:21.888716 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:22.020274 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:22.144867 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:22.150868 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:22.390609 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:22.527505 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:22.584910 1252456 pod_ready.go:102] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:22.645181 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:22.651074 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:22.887612 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:23.021819 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:23.144446 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:23.150661 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:23.387083 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:23.518930 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:23.646136 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:23.652085 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:23.887393 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:24.018883 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:24.082956 1252456 pod_ready.go:92] pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:24.083056 1252456 pod_ready.go:81] duration metric: took 37.027373752s waiting for pod "coredns-5dd5756b68-kbgpr" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.083084 1252456 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.116939 1252456 pod_ready.go:92] pod "etcd-addons-135796" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:24.116967 1252456 pod_ready.go:81] duration metric: took 33.861228ms waiting for pod "etcd-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.116985 1252456 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.130826 1252456 pod_ready.go:92] pod "kube-apiserver-addons-135796" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:24.130849 1252456 pod_ready.go:81] duration metric: took 13.856887ms waiting for pod "kube-apiserver-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.130862 1252456 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.152350 1252456 pod_ready.go:92] pod "kube-controller-manager-addons-135796" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:24.152513 1252456 pod_ready.go:81] duration metric: took 21.631883ms waiting for pod "kube-controller-manager-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.152541 1252456 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bhsf2" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.152921 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:24.156736 1252456 kapi.go:107] duration metric: took 32.552223177s to wait for kubernetes.io/minikube-addons=registry ...
	I1114 13:36:24.161901 1252456 pod_ready.go:92] pod "kube-proxy-bhsf2" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:24.161931 1252456 pod_ready.go:81] duration metric: took 9.351297ms waiting for pod "kube-proxy-bhsf2" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.161944 1252456 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.387508 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:24.473713 1252456 pod_ready.go:92] pod "kube-scheduler-addons-135796" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:24.473740 1252456 pod_ready.go:81] duration metric: took 311.787694ms waiting for pod "kube-scheduler-addons-135796" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:24.473750 1252456 pod_ready.go:38] duration metric: took 37.426365829s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:36:24.473765 1252456 api_server.go:52] waiting for apiserver process to appear ...
	I1114 13:36:24.473824 1252456 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:36:24.491854 1252456 api_server.go:72] duration metric: took 39.962907902s to wait for apiserver process to appear ...
	I1114 13:36:24.491875 1252456 api_server.go:88] waiting for apiserver healthz status ...
	I1114 13:36:24.491893 1252456 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1114 13:36:24.502171 1252456 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1114 13:36:24.503691 1252456 api_server.go:141] control plane version: v1.28.3
	I1114 13:36:24.503717 1252456 api_server.go:131] duration metric: took 11.835377ms to wait for apiserver health ...
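
The healthz probe logged just above can be reproduced by hand against the same cluster. A minimal sketch, assuming the addons-135796 profile from this run is still up; the context name and the /healthz path are taken from the log, nothing else is:

	# Ask the apiserver for its health status via kubectl's raw API access;
	# a healthy control plane answers with the body "ok", as logged above.
	kubectl --context addons-135796 get --raw='/healthz'
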
	I1114 13:36:24.503726 1252456 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 13:36:24.520560 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:24.644424 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:24.682449 1252456 system_pods.go:59] 18 kube-system pods found
	I1114 13:36:24.682490 1252456 system_pods.go:61] "coredns-5dd5756b68-kbgpr" [dd8760f0-83ba-433f-8258-ddc83d596ea4] Running
	I1114 13:36:24.682500 1252456 system_pods.go:61] "csi-hostpath-attacher-0" [b1f865fa-d766-4554-9fc1-5e240eb60ddd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 13:36:24.682509 1252456 system_pods.go:61] "csi-hostpath-resizer-0" [0aedaa42-f324-4b7c-a480-f6a601bfdd5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 13:36:24.682519 1252456 system_pods.go:61] "csi-hostpathplugin-gc4wd" [9821f294-2a57-4ec2-b281-c86a356d0cfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 13:36:24.682532 1252456 system_pods.go:61] "etcd-addons-135796" [66125db3-e33c-444a-bb27-da039a8901f2] Running
	I1114 13:36:24.682538 1252456 system_pods.go:61] "kindnet-594t9" [8ecc81d9-6ef6-4328-baf7-cd956c875d14] Running
	I1114 13:36:24.682547 1252456 system_pods.go:61] "kube-apiserver-addons-135796" [94fbf3c9-8f8c-4e4e-bd12-d5db23209130] Running
	I1114 13:36:24.682553 1252456 system_pods.go:61] "kube-controller-manager-addons-135796" [47f8b9cb-6711-4ba6-8ca2-48d68275a196] Running
	I1114 13:36:24.682560 1252456 system_pods.go:61] "kube-ingress-dns-minikube" [15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 13:36:24.682568 1252456 system_pods.go:61] "kube-proxy-bhsf2" [e95f2d32-0ba4-40d0-81c1-db1065fb2195] Running
	I1114 13:36:24.682575 1252456 system_pods.go:61] "kube-scheduler-addons-135796" [55c2cd5b-5895-403d-9bd0-1bbe5a4455f0] Running
	I1114 13:36:24.682582 1252456 system_pods.go:61] "metrics-server-7c66d45ddc-vq7v7" [f0795cc1-d627-4e83-99c7-ffd6af24b9b9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 13:36:24.682591 1252456 system_pods.go:61] "nvidia-device-plugin-daemonset-drddv" [3aacd730-af31-478b-8629-70f475d2e57a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 13:36:24.682603 1252456 system_pods.go:61] "registry-ntgrp" [a00a4394-1bb1-48dd-9264-07512de6c23b] Running
	I1114 13:36:24.682609 1252456 system_pods.go:61] "registry-proxy-rzmvh" [833f2957-ab82-4119-85d5-4aabb9f89026] Running
	I1114 13:36:24.682614 1252456 system_pods.go:61] "snapshot-controller-58dbcc7b99-2vh5f" [53815c84-dbbd-4d81-bfc0-29a423f9eb1d] Running
	I1114 13:36:24.682619 1252456 system_pods.go:61] "snapshot-controller-58dbcc7b99-ljw5k" [68ef4713-34b8-4238-ad00-99b15b325001] Running
	I1114 13:36:24.682626 1252456 system_pods.go:61] "storage-provisioner" [bb03cb23-1c6a-4149-8c63-3f28e2d701a5] Running
	I1114 13:36:24.682632 1252456 system_pods.go:74] duration metric: took 178.900049ms to wait for pod list to return data ...
	I1114 13:36:24.682643 1252456 default_sa.go:34] waiting for default service account to be created ...
	I1114 13:36:24.872682 1252456 default_sa.go:45] found service account: "default"
	I1114 13:36:24.872709 1252456 default_sa.go:55] duration metric: took 190.059506ms for default service account to be created ...
	I1114 13:36:24.872719 1252456 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 13:36:24.887424 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:25.020886 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:25.087111 1252456 system_pods.go:86] 18 kube-system pods found
	I1114 13:36:25.087157 1252456 system_pods.go:89] "coredns-5dd5756b68-kbgpr" [dd8760f0-83ba-433f-8258-ddc83d596ea4] Running
	I1114 13:36:25.087169 1252456 system_pods.go:89] "csi-hostpath-attacher-0" [b1f865fa-d766-4554-9fc1-5e240eb60ddd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 13:36:25.087180 1252456 system_pods.go:89] "csi-hostpath-resizer-0" [0aedaa42-f324-4b7c-a480-f6a601bfdd5f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 13:36:25.087195 1252456 system_pods.go:89] "csi-hostpathplugin-gc4wd" [9821f294-2a57-4ec2-b281-c86a356d0cfc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 13:36:25.087202 1252456 system_pods.go:89] "etcd-addons-135796" [66125db3-e33c-444a-bb27-da039a8901f2] Running
	I1114 13:36:25.087215 1252456 system_pods.go:89] "kindnet-594t9" [8ecc81d9-6ef6-4328-baf7-cd956c875d14] Running
	I1114 13:36:25.087232 1252456 system_pods.go:89] "kube-apiserver-addons-135796" [94fbf3c9-8f8c-4e4e-bd12-d5db23209130] Running
	I1114 13:36:25.087249 1252456 system_pods.go:89] "kube-controller-manager-addons-135796" [47f8b9cb-6711-4ba6-8ca2-48d68275a196] Running
	I1114 13:36:25.087267 1252456 system_pods.go:89] "kube-ingress-dns-minikube" [15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 13:36:25.087282 1252456 system_pods.go:89] "kube-proxy-bhsf2" [e95f2d32-0ba4-40d0-81c1-db1065fb2195] Running
	I1114 13:36:25.087294 1252456 system_pods.go:89] "kube-scheduler-addons-135796" [55c2cd5b-5895-403d-9bd0-1bbe5a4455f0] Running
	I1114 13:36:25.087303 1252456 system_pods.go:89] "metrics-server-7c66d45ddc-vq7v7" [f0795cc1-d627-4e83-99c7-ffd6af24b9b9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1114 13:36:25.087313 1252456 system_pods.go:89] "nvidia-device-plugin-daemonset-drddv" [3aacd730-af31-478b-8629-70f475d2e57a] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1114 13:36:25.087324 1252456 system_pods.go:89] "registry-ntgrp" [a00a4394-1bb1-48dd-9264-07512de6c23b] Running
	I1114 13:36:25.087334 1252456 system_pods.go:89] "registry-proxy-rzmvh" [833f2957-ab82-4119-85d5-4aabb9f89026] Running
	I1114 13:36:25.087340 1252456 system_pods.go:89] "snapshot-controller-58dbcc7b99-2vh5f" [53815c84-dbbd-4d81-bfc0-29a423f9eb1d] Running
	I1114 13:36:25.087346 1252456 system_pods.go:89] "snapshot-controller-58dbcc7b99-ljw5k" [68ef4713-34b8-4238-ad00-99b15b325001] Running
	I1114 13:36:25.087351 1252456 system_pods.go:89] "storage-provisioner" [bb03cb23-1c6a-4149-8c63-3f28e2d701a5] Running
	I1114 13:36:25.087360 1252456 system_pods.go:126] duration metric: took 214.635825ms to wait for k8s-apps to be running ...
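
The 18-pod inventory above corresponds to a plain pod listing at this point in the run. A sketch, assuming the cluster is still reachable (the context name is taken from the log):

	# List kube-system pods with node placement; the Pending entries here match
	# the "ContainersNotReady" pods enumerated in the log above.
	kubectl --context addons-135796 -n kube-system get pods -o wide
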
	I1114 13:36:25.087374 1252456 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 13:36:25.087438 1252456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:36:25.105548 1252456 system_svc.go:56] duration metric: took 18.165274ms WaitForService to wait for kubelet.
	I1114 13:36:25.105602 1252456 kubeadm.go:581] duration metric: took 40.57663803s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
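
The kubelet check above is a plain systemd query executed on the node over SSH. A sketch of running the equivalent interactively; the -p profile flag is standard minikube usage and an assumption here, not something this log shows:

	# Exit status 0 (output "active") means the kubelet service is running.
	minikube -p addons-135796 ssh -- sudo systemctl is-active kubelet
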
	I1114 13:36:25.105623 1252456 node_conditions.go:102] verifying NodePressure condition ...
	I1114 13:36:25.145225 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:25.275911 1252456 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 13:36:25.275947 1252456 node_conditions.go:123] node cpu capacity is 2
	I1114 13:36:25.275959 1252456 node_conditions.go:105] duration metric: took 170.326292ms to run NodePressure ...
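
The NodePressure step reads node capacity from the API; the figures logged above (203034800Ki ephemeral storage, 2 CPUs) can be pulled directly. A sketch, assuming the node name matches the profile name as elsewhere in this log:

	# Print the node's capacity map, which carries the cpu and
	# ephemeral-storage values quoted in the log above.
	kubectl --context addons-135796 get node addons-135796 -o jsonpath='{.status.capacity}'
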
	I1114 13:36:25.275984 1252456 start.go:228] waiting for startup goroutines ...
	I1114 13:36:25.390119 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:25.518690 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:25.644436 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:25.887319 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:26.019939 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:26.144291 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:26.388024 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:26.519934 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:26.644703 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:26.887068 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:27.019387 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:27.144642 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:27.387320 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:27.519370 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:27.644002 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:27.887365 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:28.025140 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:28.146177 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:28.387228 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:28.519778 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:28.644822 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:28.886695 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:29.018924 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:29.145496 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:29.387138 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:29.519794 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:29.643901 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:29.887167 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:30.026713 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:30.159369 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:30.387280 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:30.519569 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:30.644229 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:30.887229 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:31.019321 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:31.145102 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:31.387516 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:31.520468 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:31.643980 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:31.886810 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:32.024934 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:32.145747 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:32.388156 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:32.520262 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:32.644869 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:32.888280 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:33.018953 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:33.144248 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:33.387195 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:33.522566 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:33.644588 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:33.887465 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:34.019109 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:34.144903 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:34.391417 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:34.520238 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:34.644327 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:34.887198 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:35.019667 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:35.145140 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:35.386939 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:35.519115 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:35.645145 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:35.887389 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:36.019845 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:36.146054 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:36.388191 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:36.520864 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:36.644972 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:36.890359 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:37.036678 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:37.149922 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:37.387638 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:37.522507 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:37.645587 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:37.887996 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:38.027404 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:38.146426 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:38.388411 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:38.527963 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:38.644581 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:38.887637 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:39.020178 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:39.146130 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:39.387236 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:39.519100 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:39.645230 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:39.889325 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:40.030074 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:40.145143 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:40.388125 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:40.519367 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:40.644927 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:40.886475 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:41.019706 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:41.144381 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:41.387781 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:41.519047 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:41.644834 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:41.887613 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:42.025789 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:42.145252 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:42.387587 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:42.519408 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:42.645647 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:42.886670 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:43.019863 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:43.145243 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:43.387121 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:43.519324 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:43.645067 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:43.887081 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:44.028093 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:44.144637 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:44.387481 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:44.519380 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:44.644536 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:44.887527 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:45.047825 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:45.170110 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:45.387838 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:45.519997 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:45.645413 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:45.887287 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:46.019408 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:46.144044 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:46.387303 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:46.518597 1252456 kapi.go:107] duration metric: took 53.018069299s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1114 13:36:46.644216 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:46.887116 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:47.144874 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:47.387133 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:47.643808 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:47.887198 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:48.144199 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:48.387140 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:48.644664 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:48.886446 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:49.144376 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:49.387284 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:49.644009 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:49.886961 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:50.145123 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:50.386584 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:50.644835 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:50.886565 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:51.145068 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:51.387293 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:51.645060 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:51.886860 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:52.144713 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:52.386707 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:52.644746 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:52.886635 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:53.144301 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:53.386795 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:53.644233 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:53.887166 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:54.144083 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:54.386919 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:54.644708 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:54.886653 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:55.144546 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:55.387017 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:55.644717 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:55.887594 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:56.143986 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:56.386878 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:56.644102 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:56.887730 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:57.144677 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:57.387488 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:57.644455 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:57.887479 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:58.144768 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:58.390545 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:58.644715 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:58.886957 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:59.145197 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:59.387055 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:59.644219 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:59.887291 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:00.177890 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:00.387259 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:00.643577 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:00.889722 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:01.145238 1252456 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:01.388755 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:01.644683 1252456 kapi.go:107] duration metric: took 1m10.046747937s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1114 13:37:01.886940 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:02.387392 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:02.888098 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:03.387338 1252456 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:03.886554 1252456 kapi.go:107] duration metric: took 1m9.015704503s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1114 13:37:03.888534 1252456 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-135796 cluster.
	I1114 13:37:03.890273 1252456 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1114 13:37:03.891951 1252456 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1114 13:37:03.893757 1252456 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, inspektor-gadget, metrics-server, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1114 13:37:03.895531 1252456 addons.go:502] enable addons completed in 1m19.733914804s: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns inspektor-gadget metrics-server storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1114 13:37:03.895581 1252456 start.go:233] waiting for cluster config update ...
	I1114 13:37:03.895615 1252456 start.go:242] writing updated cluster config ...
	I1114 13:37:03.895931 1252456 ssh_runner.go:195] Run: rm -f paused
	I1114 13:37:03.970474 1252456 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 13:37:03.972667 1252456 out.go:177] * Done! kubectl is now configured to use "addons-135796" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2ad703c5715d5       dd1b12fcb6097       6 seconds ago       Exited              hello-world-app           2                   ce6711f4407f8       hello-world-app-5d77478584-mdrs6
	c32cc7dbb6838       aae348c9fbd40       33 seconds ago      Running             nginx                     0                   e7f9c7a44a765       nginx
	74243f74c9b9d       14b04e7ab95a8       43 seconds ago      Running             headlamp                  0                   abc7a69fedbfd       headlamp-777fd4b855-mk5hh
	895dc333c21f5       2a5f29343eb03       2 minutes ago       Running             gcp-auth                  0                   228dc895c7c59       gcp-auth-d4c87556c-wzj2h
	66d6edb4c6583       f065bfef03d73       2 minutes ago       Exited              controller                0                   2bab4ddad7d88       ingress-nginx-controller-7c6974c4d8-dpc9j
	33c66762bdc4e       af594c6a879f2       2 minutes ago       Exited              patch                     2                   3839aa121b03e       ingress-nginx-admission-patch-5sk97
	e64dc2a065e23       af594c6a879f2       2 minutes ago       Exited              create                    0                   13dc239f06442       ingress-nginx-admission-create-vjv4j
	1ffe826279e77       97e04611ad434       2 minutes ago       Running             coredns                   0                   5acc94edb3512       coredns-5dd5756b68-kbgpr
	8ee42b5af0ad5       ba04bb24b9575       3 minutes ago       Running             storage-provisioner       0                   fb7933148c31c       storage-provisioner
	0f9b74c79cc02       04b4eaa3d3db8       3 minutes ago       Running             kindnet-cni               0                   bcf92c227d1d6       kindnet-594t9
	8f6726f5f0c00       a5dd5cdd6d3ef       3 minutes ago       Running             kube-proxy                0                   b8ba5cba71c66       kube-proxy-bhsf2
	355f5252ffd93       42a4e73724daa       3 minutes ago       Running             kube-scheduler            0                   957973d3a66c4       kube-scheduler-addons-135796
	3a63089219e16       8276439b4f237       3 minutes ago       Running             kube-controller-manager   0                   f416f44996657       kube-controller-manager-addons-135796
	82ac0684a5fa7       537e9a59ee2fd       3 minutes ago       Running             kube-apiserver            0                   19ae57d5f5ce0       kube-apiserver-addons-135796
	a6c59cec296aa       9cdd6470f48c8       3 minutes ago       Running             etcd                      0                   4acffe546c087       etcd-addons-135796
	
	* 
	* ==> containerd <==
	* Nov 14 13:38:58 addons-135796 containerd[745]: time="2023-11-14T13:38:58.663381406Z" level=info msg="Stop container \"66d6edb4c6583ccea1bbf9812414814936c0ce76d5eee85f8f60992f929e03c2\" with signal terminated"
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.599236954Z" level=info msg="CreateContainer within sandbox \"ce6711f4407f8ffe31ec852dc9d8cad2aed1c235634e858fba78dfffb5d0728c\" for container &ContainerMetadata{Name:hello-world-app,Attempt:2,}"
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.614893359Z" level=info msg="CreateContainer within sandbox \"ce6711f4407f8ffe31ec852dc9d8cad2aed1c235634e858fba78dfffb5d0728c\" for &ContainerMetadata{Name:hello-world-app,Attempt:2,} returns container id \"2ad703c5715d5c0f617441e835d2415e4582eb15d2201c1d5eab5cb8272d941e\""
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.615830398Z" level=info msg="StartContainer for \"2ad703c5715d5c0f617441e835d2415e4582eb15d2201c1d5eab5cb8272d941e\""
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.702859178Z" level=info msg="StartContainer for \"2ad703c5715d5c0f617441e835d2415e4582eb15d2201c1d5eab5cb8272d941e\" returns successfully"
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.732417844Z" level=info msg="shim disconnected" id=2ad703c5715d5c0f617441e835d2415e4582eb15d2201c1d5eab5cb8272d941e
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.732648235Z" level=warning msg="cleaning up after shim disconnected" id=2ad703c5715d5c0f617441e835d2415e4582eb15d2201c1d5eab5cb8272d941e namespace=k8s.io
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.732673851Z" level=info msg="cleaning up dead shim"
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.744780515Z" level=warning msg="cleanup warnings time=\"2023-11-14T13:38:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=12069 runtime=io.containerd.runc.v2\n"
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.868971861Z" level=info msg="RemoveContainer for \"1928567ee1782baa1756739ce141d65f7b18ce2b4258321bb5a53da97973e3da\""
	Nov 14 13:38:59 addons-135796 containerd[745]: time="2023-11-14T13:38:59.882287440Z" level=info msg="RemoveContainer for \"1928567ee1782baa1756739ce141d65f7b18ce2b4258321bb5a53da97973e3da\" returns successfully"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.671897106Z" level=info msg="Kill container \"66d6edb4c6583ccea1bbf9812414814936c0ce76d5eee85f8f60992f929e03c2\""
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.763090838Z" level=info msg="shim disconnected" id=66d6edb4c6583ccea1bbf9812414814936c0ce76d5eee85f8f60992f929e03c2
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.763158260Z" level=warning msg="cleaning up after shim disconnected" id=66d6edb4c6583ccea1bbf9812414814936c0ce76d5eee85f8f60992f929e03c2 namespace=k8s.io
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.763169312Z" level=info msg="cleaning up dead shim"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.774640228Z" level=warning msg="cleanup warnings time=\"2023-11-14T13:39:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=12101 runtime=io.containerd.runc.v2\n"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.778155294Z" level=info msg="StopContainer for \"66d6edb4c6583ccea1bbf9812414814936c0ce76d5eee85f8f60992f929e03c2\" returns successfully"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.778724089Z" level=info msg="StopPodSandbox for \"2bab4ddad7d88c58ac4249f0cea1272db22d516e49cdf9c3ae3cae4dbb9ca213\""
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.778813024Z" level=info msg="Container to stop \"66d6edb4c6583ccea1bbf9812414814936c0ce76d5eee85f8f60992f929e03c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.814316787Z" level=info msg="shim disconnected" id=2bab4ddad7d88c58ac4249f0cea1272db22d516e49cdf9c3ae3cae4dbb9ca213
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.814381000Z" level=warning msg="cleaning up after shim disconnected" id=2bab4ddad7d88c58ac4249f0cea1272db22d516e49cdf9c3ae3cae4dbb9ca213 namespace=k8s.io
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.814391667Z" level=info msg="cleaning up dead shim"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.826779044Z" level=warning msg="cleanup warnings time=\"2023-11-14T13:39:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=12132 runtime=io.containerd.runc.v2\n"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.884124187Z" level=info msg="TearDown network for sandbox \"2bab4ddad7d88c58ac4249f0cea1272db22d516e49cdf9c3ae3cae4dbb9ca213\" successfully"
	Nov 14 13:39:00 addons-135796 containerd[745]: time="2023-11-14T13:39:00.884179670Z" level=info msg="StopPodSandbox for \"2bab4ddad7d88c58ac4249f0cea1272db22d516e49cdf9c3ae3cae4dbb9ca213\" returns successfully"
	
	* 
	* ==> coredns [1ffe826279e77fc4506412b1eebe92d26d0c4fa7b25457821e9df28b97bf7298] <==
	* [INFO] 10.244.0.18:44383 - 33110 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000261956s
	[INFO] 10.244.0.18:40885 - 55619 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043011s
	[INFO] 10.244.0.18:40885 - 26292 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000084701s
	[INFO] 10.244.0.18:40885 - 19605 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00005984s
	[INFO] 10.244.0.18:40885 - 39209 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001135086s
	[INFO] 10.244.0.18:40885 - 39860 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001072284s
	[INFO] 10.244.0.18:40885 - 52330 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000089173s
	[INFO] 10.244.0.18:48330 - 51601 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000151359s
	[INFO] 10.244.0.18:48330 - 2513 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000104304s
	[INFO] 10.244.0.18:45798 - 13136 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056s
	[INFO] 10.244.0.18:45798 - 16031 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000082634s
	[INFO] 10.244.0.18:48330 - 14578 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0001685s
	[INFO] 10.244.0.18:48330 - 21067 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000065379s
	[INFO] 10.244.0.18:45798 - 37336 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053711s
	[INFO] 10.244.0.18:45798 - 54529 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000087319s
	[INFO] 10.244.0.18:48330 - 44265 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000125021s
	[INFO] 10.244.0.18:48330 - 53705 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00010185s
	[INFO] 10.244.0.18:45798 - 24375 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000088542s
	[INFO] 10.244.0.18:45798 - 22838 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000096779s
	[INFO] 10.244.0.18:45798 - 59597 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.009902314s
	[INFO] 10.244.0.18:48330 - 35650 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.011848129s
	[INFO] 10.244.0.18:45798 - 12828 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001565312s
	[INFO] 10.244.0.18:45798 - 30584 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000074076s
	[INFO] 10.244.0.18:48330 - 48408 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001926985s
	[INFO] 10.244.0.18:48330 - 57846 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000094392s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-135796
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-135796
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=addons-135796
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_35_32_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-135796
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:35:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-135796
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 13:38:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 13:38:35 +0000   Tue, 14 Nov 2023 13:35:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 13:38:35 +0000   Tue, 14 Nov 2023 13:35:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 13:38:35 +0000   Tue, 14 Nov 2023 13:35:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 13:38:35 +0000   Tue, 14 Nov 2023 13:35:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-135796
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 90ce7e5a856c42bfb82231974bc7cb3c
	  System UUID:                bb8fc27b-c1a3-460b-a8d0-bbccd39c1804
	  Boot ID:                    a87df0b0-e3c4-42f4-a7f5-31b7e72e6999
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-mdrs6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  gcp-auth                    gcp-auth-d4c87556c-wzj2h                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  headlamp                    headlamp-777fd4b855-mk5hh                0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-5dd5756b68-kbgpr                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m22s
	  kube-system                 etcd-addons-135796                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m35s
	  kube-system                 kindnet-594t9                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m23s
	  kube-system                 kube-apiserver-addons-135796             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-addons-135796    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-bhsf2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kube-scheduler-addons-135796             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)    100m (5%)
	  memory             220Mi (2%)    220Mi (2%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-1Gi      0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	  hugepages-32Mi     0 (0%)        0 (0%)
	  hugepages-64Ki     0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m20s  kube-proxy       
	  Normal  Starting                 3m35s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m35s  kubelet          Node addons-135796 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s  kubelet          Node addons-135796 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s  kubelet          Node addons-135796 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m35s  kubelet          Node addons-135796 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m35s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m35s  kubelet          Node addons-135796 status is now: NodeReady
	  Normal  RegisteredNode           3m23s  node-controller  Node addons-135796 event: Registered Node addons-135796 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001096] FS-Cache: O-key=[8] '9c3f5c0100000000'
	[  +0.000735] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000996] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000293e98e9
	[  +0.001092] FS-Cache: N-key=[8] '9c3f5c0100000000'
	[  +0.005238] FS-Cache: Duplicate cookie detected
	[  +0.000749] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001066] FS-Cache: O-cookie d=00000000c53663eb{9p.inode} n=000000006edf6e80
	[  +0.001086] FS-Cache: O-key=[8] '9c3f5c0100000000'
	[  +0.000736] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000993] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000d3a85785
	[  +0.001134] FS-Cache: N-key=[8] '9c3f5c0100000000'
	[  +3.026116] FS-Cache: Duplicate cookie detected
	[  +0.000755] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001113] FS-Cache: O-cookie d=00000000c53663eb{9p.inode} n=000000002a6e05b9
	[  +0.001115] FS-Cache: O-key=[8] '9b3f5c0100000000'
	[  +0.000744] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000989] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000293e98e9
	[  +0.001125] FS-Cache: N-key=[8] '9b3f5c0100000000'
	[  +0.397596] FS-Cache: Duplicate cookie detected
	[  +0.000764] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001125] FS-Cache: O-cookie d=00000000c53663eb{9p.inode} n=00000000020bf62c
	[  +0.001114] FS-Cache: O-key=[8] 'a13f5c0100000000'
	[  +0.000747] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000981] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000c28c9963
	[  +0.001122] FS-Cache: N-key=[8] 'a13f5c0100000000'
	
	* 
	* ==> etcd [a6c59cec296aabe07ed7e12503e25526d5a0c20f6f56879032c3baf270cd76c9] <==
	* {"level":"info","ts":"2023-11-14T13:35:24.589732Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T13:35:24.589773Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T13:35:24.589781Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T13:35:24.590291Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-14T13:35:24.590306Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-14T13:35:24.590688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-14T13:35:24.590761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-14T13:35:25.360845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T13:35:25.361075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T13:35:25.361183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-11-14T13:35:25.36133Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T13:35:25.361412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-14T13:35:25.361503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-11-14T13:35:25.361582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-14T13:35:25.36389Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-135796 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T13:35:25.364168Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:35:25.364806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:35:25.365829Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T13:35:25.366419Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-14T13:35:25.366532Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:35:25.372511Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:35:25.372646Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:35:25.372781Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T13:35:25.372884Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T13:35:25.376826Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> gcp-auth [895dc333c21f5cbb56dc393e3ebf587367483181e2ff27ad8c0bbd6a4c2f49ca] <==
	* 2023/11/14 13:37:02 GCP Auth Webhook started!
	2023/11/14 13:37:14 Ready to marshal response ...
	2023/11/14 13:37:14 Ready to write response ...
	2023/11/14 13:37:22 Ready to marshal response ...
	2023/11/14 13:37:22 Ready to write response ...
	2023/11/14 13:37:25 Ready to marshal response ...
	2023/11/14 13:37:25 Ready to write response ...
	2023/11/14 13:37:25 Ready to marshal response ...
	2023/11/14 13:37:25 Ready to write response ...
	2023/11/14 13:37:33 Ready to marshal response ...
	2023/11/14 13:37:33 Ready to write response ...
	2023/11/14 13:37:57 Ready to marshal response ...
	2023/11/14 13:37:57 Ready to write response ...
	2023/11/14 13:38:18 Ready to marshal response ...
	2023/11/14 13:38:18 Ready to write response ...
	2023/11/14 13:38:18 Ready to marshal response ...
	2023/11/14 13:38:18 Ready to write response ...
	2023/11/14 13:38:18 Ready to marshal response ...
	2023/11/14 13:38:18 Ready to write response ...
	2023/11/14 13:38:30 Ready to marshal response ...
	2023/11/14 13:38:30 Ready to write response ...
	2023/11/14 13:38:40 Ready to marshal response ...
	2023/11/14 13:38:40 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  13:39:06 up 10:21,  0 users,  load average: 0.91, 1.04, 1.45
	Linux addons-135796 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [0f9b74c79cc02ecc32ddd56d0ac4e53bdd1dfa5b5775a90e9a2e8ac5e2072443] <==
	* I1114 13:37:06.124342       1 main.go:227] handling current node
	I1114 13:37:16.136634       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:37:16.136973       1 main.go:227] handling current node
	I1114 13:37:26.150027       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:37:26.150054       1 main.go:227] handling current node
	I1114 13:37:36.166996       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:37:36.167024       1 main.go:227] handling current node
	I1114 13:37:46.178607       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:37:46.178636       1 main.go:227] handling current node
	I1114 13:37:56.187474       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:37:56.187504       1 main.go:227] handling current node
	I1114 13:38:06.200959       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:38:06.200985       1 main.go:227] handling current node
	I1114 13:38:16.205983       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:38:16.206018       1 main.go:227] handling current node
	I1114 13:38:26.210084       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:38:26.210112       1 main.go:227] handling current node
	I1114 13:38:36.225392       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:38:36.225422       1 main.go:227] handling current node
	I1114 13:38:46.229419       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:38:46.229447       1 main.go:227] handling current node
	I1114 13:38:56.242065       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:38:56.242093       1 main.go:227] handling current node
	I1114 13:39:06.255107       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:39:06.255143       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [82ac0684a5fa74cb010a3ce434547ff4f6da17b98c62ef09d5be338de0107e12] <==
	* I1114 13:38:13.542311       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.542534       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:13.557782       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.557927       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:13.572614       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.576848       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:13.585200       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.585311       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:13.601891       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.601953       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:13.618504       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.618557       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:13.619330       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:13.619447       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1114 13:38:14.573219       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1114 13:38:14.618736       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1114 13:38:14.638309       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1114 13:38:18.816346       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.126.98"}
	I1114 13:38:29.847011       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1114 13:38:30.293701       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.106.230.172"}
	I1114 13:38:31.057861       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1114 13:38:31.071672       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1114 13:38:32.101358       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1114 13:38:40.553255       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.5.185"}
	I1114 13:38:40.768722       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	* 
	* ==> kube-controller-manager [3a63089219e16d20f8d8861186b5e69aae32bd2422c1d9a3fe0352e3f488c372] <==
	* I1114 13:38:40.389183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.526259ms"
	I1114 13:38:40.389617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="33.69µs"
	I1114 13:38:40.389820       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="25.551µs"
	I1114 13:38:41.185072       1 namespace_controller.go:182] "Namespace has been deleted" namespace="gadget"
	W1114 13:38:41.887739       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:38:41.887780       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1114 13:38:42.825360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="77.874µs"
	I1114 13:38:43.346700       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I1114 13:38:43.346745       1 shared_informer.go:318] Caches are synced for resource quota
	W1114 13:38:43.660446       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:38:43.660479       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1114 13:38:43.710262       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1114 13:38:43.710311       1 shared_informer.go:318] Caches are synced for garbage collector
	I1114 13:38:43.833244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.429µs"
	I1114 13:38:44.835466       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.987µs"
	W1114 13:38:48.093264       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:38:48.093302       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 13:38:51.575090       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:38:51.575128       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 13:38:52.274521       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:38:52.274555       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1114 13:38:57.642323       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.36µs"
	I1114 13:38:57.645678       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1114 13:38:57.654304       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1114 13:38:59.880084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="47.36µs"
	
	* 
	* ==> kube-proxy [8f6726f5f0c003f6fba101850957ac26ec25071f6f07f6365d659e63be9610c7] <==
	* I1114 13:35:45.629164       1 server_others.go:69] "Using iptables proxy"
	I1114 13:35:45.646405       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1114 13:35:45.713263       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1114 13:35:45.716680       1 server_others.go:152] "Using iptables Proxier"
	I1114 13:35:45.716719       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1114 13:35:45.716727       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1114 13:35:45.716900       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 13:35:45.717154       1 server.go:846] "Version info" version="v1.28.3"
	I1114 13:35:45.717166       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:35:45.719713       1 config.go:188] "Starting service config controller"
	I1114 13:35:45.719746       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 13:35:45.719770       1 config.go:97] "Starting endpoint slice config controller"
	I1114 13:35:45.719774       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 13:35:45.728139       1 config.go:315] "Starting node config controller"
	I1114 13:35:45.728160       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 13:35:45.821459       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 13:35:45.821510       1 shared_informer.go:318] Caches are synced for service config
	I1114 13:35:45.832083       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [355f5252ffd931d18a07fbe7b47df3a4ad27ce5c22ac1a04f076cf71a6c43e7f] <==
	* W1114 13:35:28.267406       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:28.267444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:28.267535       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:35:28.267555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1114 13:35:28.267640       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 13:35:28.267655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 13:35:28.267716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:35:28.267737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 13:35:28.267882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:28.267903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:28.268066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 13:35:28.268596       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1114 13:35:29.082811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:29.082845       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:29.120378       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:35:29.120431       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 13:35:29.154695       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:29.154729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:29.213923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:35:29.213964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 13:35:29.352641       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:29.352882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:29.366818       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 13:35:29.366861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1114 13:35:31.112379       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 14 13:38:44 addons-135796 kubelet[1352]: I1114 13:38:44.821886    1352 scope.go:117] "RemoveContainer" containerID="1928567ee1782baa1756739ce141d65f7b18ce2b4258321bb5a53da97973e3da"
	Nov 14 13:38:44 addons-135796 kubelet[1352]: E1114 13:38:44.822172    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-mdrs6_default(a46389c5-5314-4892-b3c7-ee36f2103d5b)\"" pod="default/hello-world-app-5d77478584-mdrs6" podUID="a46389c5-5314-4892-b3c7-ee36f2103d5b"
	Nov 14 13:38:52 addons-135796 kubelet[1352]: I1114 13:38:52.594000    1352 scope.go:117] "RemoveContainer" containerID="287c2d2f4ab5efabdcd8473c71791d58a0e789f065647ce120ea8140fe7e4050"
	Nov 14 13:38:52 addons-135796 kubelet[1352]: I1114 13:38:52.842448    1352 scope.go:117] "RemoveContainer" containerID="287c2d2f4ab5efabdcd8473c71791d58a0e789f065647ce120ea8140fe7e4050"
	Nov 14 13:38:52 addons-135796 kubelet[1352]: I1114 13:38:52.842763    1352 scope.go:117] "RemoveContainer" containerID="09be99cf9392dcc19cbb95c8184b881474cf2aa748102bbbf9b6b1fcfe61d7e3"
	Nov 14 13:38:52 addons-135796 kubelet[1352]: E1114 13:38:52.843009    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6"
	Nov 14 13:38:56 addons-135796 kubelet[1352]: I1114 13:38:56.584225    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqpng\" (UniqueName: \"kubernetes.io/projected/15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6-kube-api-access-cqpng\") pod \"15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6\" (UID: \"15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6\") "
	Nov 14 13:38:56 addons-135796 kubelet[1352]: I1114 13:38:56.589021    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6-kube-api-access-cqpng" (OuterVolumeSpecName: "kube-api-access-cqpng") pod "15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6" (UID: "15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6"). InnerVolumeSpecName "kube-api-access-cqpng". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 13:38:56 addons-135796 kubelet[1352]: I1114 13:38:56.684784    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cqpng\" (UniqueName: \"kubernetes.io/projected/15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6-kube-api-access-cqpng\") on node \"addons-135796\" DevicePath \"\""
	Nov 14 13:38:56 addons-135796 kubelet[1352]: I1114 13:38:56.856193    1352 scope.go:117] "RemoveContainer" containerID="09be99cf9392dcc19cbb95c8184b881474cf2aa748102bbbf9b6b1fcfe61d7e3"
	Nov 14 13:38:57 addons-135796 kubelet[1352]: I1114 13:38:57.597235    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6" path="/var/lib/kubelet/pods/15e0f8be-4f6c-431b-ae5a-1f67bf8a22a6/volumes"
	Nov 14 13:38:59 addons-135796 kubelet[1352]: I1114 13:38:59.594442    1352 scope.go:117] "RemoveContainer" containerID="1928567ee1782baa1756739ce141d65f7b18ce2b4258321bb5a53da97973e3da"
	Nov 14 13:38:59 addons-135796 kubelet[1352]: I1114 13:38:59.597971    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0262b214-0016-45a8-8627-b161731bf9ac" path="/var/lib/kubelet/pods/0262b214-0016-45a8-8627-b161731bf9ac/volumes"
	Nov 14 13:38:59 addons-135796 kubelet[1352]: I1114 13:38:59.598552    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="42779909-5459-4728-ac73-9165b258f8a2" path="/var/lib/kubelet/pods/42779909-5459-4728-ac73-9165b258f8a2/volumes"
	Nov 14 13:38:59 addons-135796 kubelet[1352]: I1114 13:38:59.866603    1352 scope.go:117] "RemoveContainer" containerID="1928567ee1782baa1756739ce141d65f7b18ce2b4258321bb5a53da97973e3da"
	Nov 14 13:38:59 addons-135796 kubelet[1352]: I1114 13:38:59.866981    1352 scope.go:117] "RemoveContainer" containerID="2ad703c5715d5c0f617441e835d2415e4582eb15d2201c1d5eab5cb8272d941e"
	Nov 14 13:38:59 addons-135796 kubelet[1352]: E1114 13:38:59.867257    1352 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-mdrs6_default(a46389c5-5314-4892-b3c7-ee36f2103d5b)\"" pod="default/hello-world-app-5d77478584-mdrs6" podUID="a46389c5-5314-4892-b3c7-ee36f2103d5b"
	Nov 14 13:39:00 addons-135796 kubelet[1352]: I1114 13:39:00.870210    1352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bab4ddad7d88c58ac4249f0cea1272db22d516e49cdf9c3ae3cae4dbb9ca213"
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.020429    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee1b3755-f4d8-486b-9828-c40aa12b14cc-webhook-cert\") pod \"ee1b3755-f4d8-486b-9828-c40aa12b14cc\" (UID: \"ee1b3755-f4d8-486b-9828-c40aa12b14cc\") "
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.020508    1352 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spqd7\" (UniqueName: \"kubernetes.io/projected/ee1b3755-f4d8-486b-9828-c40aa12b14cc-kube-api-access-spqd7\") pod \"ee1b3755-f4d8-486b-9828-c40aa12b14cc\" (UID: \"ee1b3755-f4d8-486b-9828-c40aa12b14cc\") "
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.023430    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1b3755-f4d8-486b-9828-c40aa12b14cc-kube-api-access-spqd7" (OuterVolumeSpecName: "kube-api-access-spqd7") pod "ee1b3755-f4d8-486b-9828-c40aa12b14cc" (UID: "ee1b3755-f4d8-486b-9828-c40aa12b14cc"). InnerVolumeSpecName "kube-api-access-spqd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.025320    1352 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1b3755-f4d8-486b-9828-c40aa12b14cc-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ee1b3755-f4d8-486b-9828-c40aa12b14cc" (UID: "ee1b3755-f4d8-486b-9828-c40aa12b14cc"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.121773    1352 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ee1b3755-f4d8-486b-9828-c40aa12b14cc-webhook-cert\") on node \"addons-135796\" DevicePath \"\""
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.121820    1352 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-spqd7\" (UniqueName: \"kubernetes.io/projected/ee1b3755-f4d8-486b-9828-c40aa12b14cc-kube-api-access-spqd7\") on node \"addons-135796\" DevicePath \"\""
	Nov 14 13:39:01 addons-135796 kubelet[1352]: I1114 13:39:01.597326    1352 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ee1b3755-f4d8-486b-9828-c40aa12b14cc" path="/var/lib/kubelet/pods/ee1b3755-f4d8-486b-9828-c40aa12b14cc/volumes"
	
	* 
	* ==> storage-provisioner [8ee42b5af0ad58f18d882c7f795d580d416d4fd3511f7a1979670be6f6029e22] <==
	* I1114 13:35:50.954599       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:35:50.981895       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:35:50.982020       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:35:51.001051       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:35:51.001274       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-135796_87b06d8d-b748-489f-94dc-c5d37b508c4d!
	I1114 13:35:51.002565       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ddadb943-9860-4700-93fe-1731c0843f05", APIVersion:"v1", ResourceVersion:"609", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-135796_87b06d8d-b748-489f-94dc-c5d37b508c4d became leader
	I1114 13:35:51.117067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-135796_87b06d8d-b748-489f-94dc-c5d37b508c4d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-135796 -n addons-135796
helpers_test.go:261: (dbg) Run:  kubectl --context addons-135796 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (38.51s)
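The addon-enable log in the post-mortem above spells out how gcp-auth behaves: credentials are mounted into every new pod, a pod can opt out via a label with the `gcp-auth-skip-secret` key, and already-running pods only pick credentials up after a recreate or a re-enable with --refresh. A minimal sketch of both operations, assuming a hypothetical pod name, image, and label value (the label key and the --refresh flag are taken from the log itself):

	# Opt a single pod out of credential mounting (pod name, image, and
	# label value "true" are illustrative; only the key comes from the log):
	kubectl --context addons-135796 run skip-demo --image=nginx \
	  --labels=gcp-auth-skip-secret=true
	# Re-run the webhook over pods that already existed:
	out/minikube-linux-arm64 -p addons-135796 addons enable gcp-auth --refresh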

TestFunctional/serial/LogsFileCmd (1.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 logs --file /tmp/TestFunctionalserialLogsFileCmd1482034956/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 logs --file /tmp/TestFunctionalserialLogsFileCmd1482034956/001/logs.txt: (1.927280768s)
functional_test.go:1251: expected empty minikube logs output, but got: 
***
-- stdout --
	

-- /stdout --
** stderr ** 
	E1114 13:43:24.605083 1280306 logs.go:195] command /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 f92fc0d909b634edc58ba0cedd8339c4a0f84d2368afc52881bd97b9a6158bdc" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 60 f92fc0d909b634edc58ba0cedd8339c4a0f84d2368afc52881bd97b9a6158bdc": Process exited with status 1
	stdout:
	
	stderr:
	time="2023-11-14T13:43:24Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-functional-927562_1463dd98028e667a1cb00eb4a4d3f7cf/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-functional-927562_1463dd98028e667a1cb00eb4a4d3f7cf/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2023-11-14T13:43:24Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-functional-927562_1463dd98028e667a1cb00eb4a4d3f7cf/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-functional-927562_1463dd98028e667a1cb00eb4a4d3f7cf/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: kube-scheduler [f92fc0d909b634edc58ba0cedd8339c4a0f84d2368afc52881bd97b9a6158bdc]

** /stderr **
***
--- FAIL: TestFunctional/serial/LogsFileCmd (1.93s)
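
The root cause in the stderr above is crictl hitting a dangling kubelet log symlink: the rotated kube-scheduler 1.log under /var/log/pods no longer exists, so `minikube logs --file` writes a warning to stderr and the test, which expects that output to be empty, fails. A hedged sketch for inspecting the state inside the node (pod log path copied from the log above):

	out/minikube-linux-arm64 -p functional-927562 ssh -- sudo ls -l /var/log/pods/kube-system_kube-scheduler-functional-927562_1463dd98028e667a1cb00eb4a4d3f7cf/kube-scheduler/
	out/minikube-linux-arm64 -p functional-927562 ssh -- sudo crictl ps -a --name kube-scheduler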

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image load --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 image load --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr: (4.453861896s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-927562" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)
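
This failure, and the two daemon-load failures that follow, share one shape: `image load --daemon` exits successfully, but the tag never appears in the node's containerd image store, so the subsequent `image ls` check fails. A minimal manual verification, assuming containerd's default Kubernetes namespace k8s.io:

	# list images as minikube sees them, then as containerd stores them on the node
	out/minikube-linux-arm64 -p functional-927562 image ls
	out/minikube-linux-arm64 -p functional-927562 ssh -- sudo ctr -n k8s.io images ls | grep addon-resizer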

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image load --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 image load --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr: (3.516628279s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-927562" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.923882609s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-927562
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image load --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 image load --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr: (3.210683012s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-927562" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.43s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image save gcr.io/google-containers/addon-resizer:functional-927562 /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)
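
Here `image save` exits quickly without writing the tarball, so the existence check on addon-resizer-save.tar fails. A hedged reproduction on the host (/tmp/addon-resizer.tar is an arbitrary scratch path, not the test's path):

	out/minikube-linux-arm64 -p functional-927562 image save gcr.io/google-containers/addon-resizer:functional-927562 /tmp/addon-resizer.tar --alsologtostderr
	ls -l /tmp/addon-resizer.tar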

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

** stderr ** 
	I1114 13:44:27.055857 1285695 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:44:27.056528 1285695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:27.056545 1285695 out.go:309] Setting ErrFile to fd 2...
	I1114 13:44:27.056552 1285695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:27.056967 1285695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:44:27.058330 1285695 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:44:27.058588 1285695 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:44:27.059162 1285695 cli_runner.go:164] Run: docker container inspect functional-927562 --format={{.State.Status}}
	I1114 13:44:27.079328 1285695 ssh_runner.go:195] Run: systemctl --version
	I1114 13:44:27.079430 1285695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927562
	I1114 13:44:27.100457 1285695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/functional-927562/id_rsa Username:docker}
	I1114 13:44:27.198761 1285695 cache_images.go:286] Loading image from: /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar
	W1114 13:44:27.198848 1285695 cache_images.go:254] Failed to load cached images for profile functional-927562. make sure the profile is running. loading images: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory
	I1114 13:44:27.198865 1285695 cache_images.go:262] succeeded pushing to: 
	I1114 13:44:27.198875 1285695 cache_images.go:263] failed pushing to: functional-927562

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)
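
The stderr above shows the cascade explicitly: stat /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar: no such file or directory, i.e. this test fails only because the preceding ImageSaveToFile never produced the tarball. To exercise `image load` in isolation when debugging, a tar can be produced on the host first (a sketch; docker save stands in for the missing minikube-produced file):

	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker save gcr.io/google-containers/addon-resizer:1.8.9 -o /tmp/addon-resizer.tar
	out/minikube-linux-arm64 -p functional-927562 image load /tmp/addon-resizer.tar --alsologtostderr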

TestIngressAddonLegacy/serial/ValidateIngressAddons (54.96s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-011886 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-011886 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.751459363s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-011886 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-011886 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d875c040-39bc-41a8-bf4a-4ac6043bc8c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d875c040-39bc-41a8-bf4a-4ac6043bc8c1] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.037358196s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-011886 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.02234443s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons disable ingress-dns --alsologtostderr -v=1: (4.57205125s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons disable ingress --alsologtostderr -v=1
E1114 13:47:03.997679 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons disable ingress --alsologtostderr -v=1: (7.609044655s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-011886
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-011886:

-- stdout --
	[
	    {
	        "Id": "dea9ad3e4aae96fee5242f006012b9a669a8f62cb64b7aef64ed7f822b28cf01",
	        "Created": "2023-11-14T13:44:52.921341277Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1286848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T13:44:53.297714978Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/dea9ad3e4aae96fee5242f006012b9a669a8f62cb64b7aef64ed7f822b28cf01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dea9ad3e4aae96fee5242f006012b9a669a8f62cb64b7aef64ed7f822b28cf01/hostname",
	        "HostsPath": "/var/lib/docker/containers/dea9ad3e4aae96fee5242f006012b9a669a8f62cb64b7aef64ed7f822b28cf01/hosts",
	        "LogPath": "/var/lib/docker/containers/dea9ad3e4aae96fee5242f006012b9a669a8f62cb64b7aef64ed7f822b28cf01/dea9ad3e4aae96fee5242f006012b9a669a8f62cb64b7aef64ed7f822b28cf01-json.log",
	        "Name": "/ingress-addon-legacy-011886",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-011886:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-011886",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8e82e9bf4eeb200f4b46c06ec41b167d64d737da1d48ceae07fb2d7e32eaab97-init/diff:/var/lib/docker/overlay2/64458dfae02165ba5e5b32269df54406638d6ee619cc4ae1d257dd52e6bbd2d5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8e82e9bf4eeb200f4b46c06ec41b167d64d737da1d48ceae07fb2d7e32eaab97/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8e82e9bf4eeb200f4b46c06ec41b167d64d737da1d48ceae07fb2d7e32eaab97/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8e82e9bf4eeb200f4b46c06ec41b167d64d737da1d48ceae07fb2d7e32eaab97/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-011886",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-011886/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-011886",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-011886",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-011886",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f90fcf3e7d0e788726228798f6e715acae40c1fbaf82bcb25e7917c45c8405c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34352"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34351"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34348"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34350"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34349"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9f90fcf3e7d0",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-011886": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "dea9ad3e4aae",
	                        "ingress-addon-legacy-011886"
	                    ],
	                    "NetworkID": "55035657473d3f75b3ac3300b8445b46bf151b74886a32c89c997dfcb31052c8",
	                    "EndpointID": "cc99b1ebc589d0e908227272f26538482c8fc782fa0d2c55d2f64bf12f84992a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-011886 -n ingress-addon-legacy-011886
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-011886 logs -n 25: (1.42924693s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                     Args                                     |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-927562                                                            | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| update-context | functional-927562                                                            | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | update-context                                                               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                       |                             |         |         |                     |                     |
	| image          | functional-927562 image ls                                                   | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	| image          | functional-927562 image load --daemon                                        | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-927562                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562 image ls                                                   | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	| image          | functional-927562 image save                                                 | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-927562                     |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562 image rm                                                   | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-927562                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562 image ls                                                   | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	| image          | functional-927562 image load                                                 | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_containerd_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562 image save --daemon                                        | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-927562                     |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562                                                            | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | image ls --format yaml                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562                                                            | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | image ls --format short                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| ssh            | functional-927562 ssh pgrep                                                  | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC |                     |
	|                | buildkitd                                                                    |                             |         |         |                     |                     |
	| image          | functional-927562                                                            | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | image ls --format json                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562                                                            | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | image ls --format table                                                      |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	| image          | functional-927562 image build -t                                             | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	|                | localhost/my-image:functional-927562                                         |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                             |                             |         |         |                     |                     |
	| image          | functional-927562 image ls                                                   | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	| delete         | -p functional-927562                                                         | functional-927562           | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:44 UTC |
	| start          | -p ingress-addon-legacy-011886                                               | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:44 UTC | 14 Nov 23 13:46 UTC |
	|                | --kubernetes-version=v1.18.20                                                |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                            |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                         |                             |         |         |                     |                     |
	|                | --container-runtime=containerd                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-011886                                                  | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:46 UTC | 14 Nov 23 13:46 UTC |
	|                | addons enable ingress                                                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-011886                                                  | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:46 UTC | 14 Nov 23 13:46 UTC |
	|                | addons enable ingress-dns                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                       |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-011886                                                  | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:46 UTC | 14 Nov 23 13:46 UTC |
	|                | ssh curl -s http://127.0.0.1/                                                |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                                 |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-011886 ip                                               | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:46 UTC | 14 Nov 23 13:46 UTC |
	| addons         | ingress-addon-legacy-011886                                                  | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:46 UTC | 14 Nov 23 13:47 UTC |
	|                | addons disable ingress-dns                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-011886                                                  | ingress-addon-legacy-011886 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | addons disable ingress                                                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                       |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:44:34
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:44:34.218513 1286388 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:44:34.218959 1286388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:34.218971 1286388 out.go:309] Setting ErrFile to fd 2...
	I1114 13:44:34.218978 1286388 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:34.219303 1286388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:44:34.219789 1286388 out.go:303] Setting JSON to false
	I1114 13:44:34.220929 1286388 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37621,"bootTime":1699931854,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 13:44:34.221041 1286388 start.go:138] virtualization:  
	I1114 13:44:34.223571 1286388 out.go:177] * [ingress-addon-legacy-011886] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:44:34.226118 1286388 notify.go:220] Checking for updates...
	I1114 13:44:34.226079 1286388 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:44:34.229753 1286388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:44:34.231789 1286388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:44:34.233910 1286388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 13:44:34.235633 1286388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:44:34.237308 1286388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:44:34.239162 1286388 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:44:34.264674 1286388 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:44:34.264810 1286388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:44:34.364196 1286388 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 13:44:34.353183039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:44:34.364322 1286388 docker.go:295] overlay module found
	I1114 13:44:34.368264 1286388 out.go:177] * Using the docker driver based on user configuration
	I1114 13:44:34.370241 1286388 start.go:298] selected driver: docker
	I1114 13:44:34.370270 1286388 start.go:902] validating driver "docker" against <nil>
	I1114 13:44:34.370284 1286388 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:44:34.370939 1286388 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:44:34.449439 1286388 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 13:44:34.438896611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:44:34.449618 1286388 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:44:34.449883 1286388 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:44:34.452370 1286388 out.go:177] * Using Docker driver with root privileges
	I1114 13:44:34.454593 1286388 cni.go:84] Creating CNI manager for ""
	I1114 13:44:34.454624 1286388 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:44:34.454638 1286388 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:44:34.454650 1286388 start_flags.go:323] config:
	{Name:ingress-addon-legacy-011886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-011886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:44:34.458571 1286388 out.go:177] * Starting control plane node ingress-addon-legacy-011886 in cluster ingress-addon-legacy-011886
	I1114 13:44:34.460694 1286388 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1114 13:44:34.462835 1286388 out.go:177] * Pulling base image ...
	I1114 13:44:34.464815 1286388 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1114 13:44:34.465013 1286388 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:44:34.486500 1286388 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 13:44:34.486531 1286388 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1114 13:44:34.528704 1286388 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1114 13:44:34.528730 1286388 cache.go:56] Caching tarball of preloaded images
	I1114 13:44:34.528923 1286388 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1114 13:44:34.531179 1286388 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1114 13:44:34.533108 1286388 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1114 13:44:34.648672 1286388 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4?checksum=md5:9e505be2989b8c051b1372c317471064 -> /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4
	I1114 13:44:44.959862 1286388 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1114 13:44:44.960677 1286388 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 ...
	I1114 13:44:46.189103 1286388 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on containerd
	I1114 13:44:46.189528 1286388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/config.json ...
	I1114 13:44:46.189565 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/config.json: {Name:mka788bfeeffdaa34d40c50a03735184ff478210 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:44:46.189774 1286388 cache.go:194] Successfully downloaded all kic artifacts
	I1114 13:44:46.189834 1286388 start.go:365] acquiring machines lock for ingress-addon-legacy-011886: {Name:mkf19e3f621ca1c6a8365f401a8d21022d7a033e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:44:46.189902 1286388 start.go:369] acquired machines lock for "ingress-addon-legacy-011886" in 51.019µs
	I1114 13:44:46.189925 1286388 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-011886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-011886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1114 13:44:46.189994 1286388 start.go:125] createHost starting for "" (driver="docker")
	I1114 13:44:46.192336 1286388 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1114 13:44:46.192584 1286388 start.go:159] libmachine.API.Create for "ingress-addon-legacy-011886" (driver="docker")
	I1114 13:44:46.192628 1286388 client.go:168] LocalClient.Create starting
	I1114 13:44:46.192691 1286388 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem
	I1114 13:44:46.192729 1286388 main.go:141] libmachine: Decoding PEM data...
	I1114 13:44:46.192749 1286388 main.go:141] libmachine: Parsing certificate...
	I1114 13:44:46.192833 1286388 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem
	I1114 13:44:46.192860 1286388 main.go:141] libmachine: Decoding PEM data...
	I1114 13:44:46.192876 1286388 main.go:141] libmachine: Parsing certificate...
	I1114 13:44:46.193246 1286388 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-011886 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 13:44:46.211287 1286388 cli_runner.go:211] docker network inspect ingress-addon-legacy-011886 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 13:44:46.211387 1286388 network_create.go:281] running [docker network inspect ingress-addon-legacy-011886] to gather additional debugging logs...
	I1114 13:44:46.211410 1286388 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-011886
	W1114 13:44:46.228147 1286388 cli_runner.go:211] docker network inspect ingress-addon-legacy-011886 returned with exit code 1
	I1114 13:44:46.228186 1286388 network_create.go:284] error running [docker network inspect ingress-addon-legacy-011886]: docker network inspect ingress-addon-legacy-011886: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-011886 not found
	I1114 13:44:46.228200 1286388 network_create.go:286] output of [docker network inspect ingress-addon-legacy-011886]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-011886 not found
	
	** /stderr **
	I1114 13:44:46.228329 1286388 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:44:46.246647 1286388 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400000e8d0}
	I1114 13:44:46.246686 1286388 network_create.go:124] attempt to create docker network ingress-addon-legacy-011886 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1114 13:44:46.246756 1286388 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-011886 ingress-addon-legacy-011886
	I1114 13:44:46.322473 1286388 network_create.go:108] docker network ingress-addon-legacy-011886 192.168.49.0/24 created
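Note: the network bootstrap above reduces to a single docker CLI call plus an inspect to confirm the assigned subnet. A minimal sketch of the same call, assuming Docker is available and substituting a hypothetical network name (scratch-net) for the profile name:

	# create an isolated bridge network with a fixed subnet/gateway, as minikube does
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 scratch-net
	# confirm the subnet actually assigned
	docker network inspect scratch-net --format '{{(index .IPAM.Config 0).Subnet}}'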
	I1114 13:44:46.322511 1286388 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-011886" container
	I1114 13:44:46.322600 1286388 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 13:44:46.340415 1286388 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-011886 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-011886 --label created_by.minikube.sigs.k8s.io=true
	I1114 13:44:46.358737 1286388 oci.go:103] Successfully created a docker volume ingress-addon-legacy-011886
	I1114 13:44:46.358825 1286388 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-011886-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-011886 --entrypoint /usr/bin/test -v ingress-addon-legacy-011886:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 13:44:47.907795 1286388 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-011886-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-011886 --entrypoint /usr/bin/test -v ingress-addon-legacy-011886:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (1.548932159s)
	I1114 13:44:47.907830 1286388 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-011886
	I1114 13:44:47.907868 1286388 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1114 13:44:47.907893 1286388 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 13:44:47.907986 1286388 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-011886:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 13:44:52.837340 1286388 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-011886:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.929308585s)
	I1114 13:44:52.837375 1286388 kic.go:203] duration metric: took 4.929478 seconds to extract preloaded images to volume
	W1114 13:44:52.837533 1286388 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 13:44:52.837645 1286388 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 13:44:52.905490 1286388 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-011886 --name ingress-addon-legacy-011886 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-011886 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-011886 --network ingress-addon-legacy-011886 --ip 192.168.49.2 --volume ingress-addon-legacy-011886:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 13:44:53.307938 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Running}}
	I1114 13:44:53.333365 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Status}}
	I1114 13:44:53.367481 1286388 cli_runner.go:164] Run: docker exec ingress-addon-legacy-011886 stat /var/lib/dpkg/alternatives/iptables
	I1114 13:44:53.432498 1286388 oci.go:144] the created container "ingress-addon-legacy-011886" has a running status.
	I1114 13:44:53.432525 1286388 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa...
	I1114 13:44:54.041880 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1114 13:44:54.041988 1286388 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 13:44:54.079979 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Status}}
	I1114 13:44:54.124122 1286388 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 13:44:54.124144 1286388 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-011886 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 13:44:54.233726 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Status}}
	I1114 13:44:54.257572 1286388 machine.go:88] provisioning docker machine ...
	I1114 13:44:54.257605 1286388 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-011886"
	I1114 13:44:54.257706 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:54.282731 1286388 main.go:141] libmachine: Using SSH client type: native
	I1114 13:44:54.283231 1286388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34352 <nil> <nil>}
	I1114 13:44:54.283256 1286388 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-011886 && echo "ingress-addon-legacy-011886" | sudo tee /etc/hostname
	I1114 13:44:54.479465 1286388 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-011886
	
	I1114 13:44:54.479613 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:54.505333 1286388 main.go:141] libmachine: Using SSH client type: native
	I1114 13:44:54.505746 1286388 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34352 <nil> <nil>}
	I1114 13:44:54.505775 1286388 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-011886' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-011886/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-011886' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:44:54.655367 1286388 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 13:44:54.655397 1286388 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1246551/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1246551/.minikube}
	I1114 13:44:54.655448 1286388 ubuntu.go:177] setting up certificates
	I1114 13:44:54.655458 1286388 provision.go:83] configureAuth start
	I1114 13:44:54.655534 1286388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-011886
	I1114 13:44:54.675929 1286388 provision.go:138] copyHostCerts
	I1114 13:44:54.675974 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.pem
	I1114 13:44:54.676006 1286388 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.pem, removing ...
	I1114 13:44:54.676019 1286388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.pem
	I1114 13:44:54.676097 1286388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.pem (1078 bytes)
	I1114 13:44:54.676181 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-1246551/.minikube/cert.pem
	I1114 13:44:54.676203 1286388 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1246551/.minikube/cert.pem, removing ...
	I1114 13:44:54.676213 1286388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1246551/.minikube/cert.pem
	I1114 13:44:54.676241 1286388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1246551/.minikube/cert.pem (1123 bytes)
	I1114 13:44:54.676285 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-1246551/.minikube/key.pem
	I1114 13:44:54.676306 1286388 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1246551/.minikube/key.pem, removing ...
	I1114 13:44:54.676313 1286388 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1246551/.minikube/key.pem
	I1114 13:44:54.676339 1286388 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1246551/.minikube/key.pem (1679 bytes)
	I1114 13:44:54.676388 1286388 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-011886 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-011886]
	I1114 13:44:55.283593 1286388 provision.go:172] copyRemoteCerts
	I1114 13:44:55.283676 1286388 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:44:55.283758 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:55.301965 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:44:55.403977 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 13:44:55.404039 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1114 13:44:55.433925 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 13:44:55.433988 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1114 13:44:55.464136 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 13:44:55.464203 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:44:55.494656 1286388 provision.go:86] duration metric: configureAuth took 839.179883ms
	I1114 13:44:55.494688 1286388 ubuntu.go:193] setting minikube options for container-runtime
	I1114 13:44:55.494890 1286388 config.go:182] Loaded profile config "ingress-addon-legacy-011886": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1114 13:44:55.494913 1286388 machine.go:91] provisioned docker machine in 1.237313048s
	I1114 13:44:55.494920 1286388 client.go:171] LocalClient.Create took 9.302283562s
	I1114 13:44:55.494938 1286388 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-011886" took 9.302357153s
	I1114 13:44:55.494951 1286388 start.go:300] post-start starting for "ingress-addon-legacy-011886" (driver="docker")
	I1114 13:44:55.494960 1286388 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:44:55.495023 1286388 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:44:55.495069 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:55.514300 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:44:55.615946 1286388 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:44:55.620212 1286388 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 13:44:55.620262 1286388 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 13:44:55.620302 1286388 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 13:44:55.620317 1286388 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 13:44:55.620329 1286388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1246551/.minikube/addons for local assets ...
	I1114 13:44:55.620399 1286388 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1246551/.minikube/files for local assets ...
	I1114 13:44:55.620482 1286388 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/12519052.pem -> 12519052.pem in /etc/ssl/certs
	I1114 13:44:55.620494 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/12519052.pem -> /etc/ssl/certs/12519052.pem
	I1114 13:44:55.620607 1286388 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 13:44:55.631458 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/12519052.pem --> /etc/ssl/certs/12519052.pem (1708 bytes)
	I1114 13:44:55.661476 1286388 start.go:303] post-start completed in 166.510202ms
	I1114 13:44:55.661851 1286388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-011886
	I1114 13:44:55.680032 1286388 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/config.json ...
	I1114 13:44:55.680321 1286388 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:44:55.680376 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:55.698321 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:44:55.795732 1286388 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 13:44:55.801937 1286388 start.go:128] duration metric: createHost completed in 9.611927094s
	I1114 13:44:55.801961 1286388 start.go:83] releasing machines lock for "ingress-addon-legacy-011886", held for 9.612048078s
	I1114 13:44:55.802037 1286388 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-011886
	I1114 13:44:55.821265 1286388 ssh_runner.go:195] Run: cat /version.json
	I1114 13:44:55.821322 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:55.821554 1286388 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:44:55.821618 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:44:55.842718 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:44:55.849140 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:44:56.067901 1286388 ssh_runner.go:195] Run: systemctl --version
	I1114 13:44:56.074105 1286388 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:44:56.080170 1286388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1114 13:44:56.112353 1286388 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1114 13:44:56.112438 1286388 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:44:56.149984 1286388 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
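Note: two CNI cleanups happen above. First, the loopback config is patched in place (a "name" field is injected if missing and cniVersion is pinned to 1.0.0); second, any bridge/podman configs are renamed with a .mk_disabled suffix so the kindnet CNI chosen later owns the pod network. The rename is trivially reversible; a minimal sketch using one of the paths reported above:

	# disable: move aside under a suffix the runtime will not load
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	# re-enable later by moving it back
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist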
	I1114 13:44:56.150050 1286388 start.go:472] detecting cgroup driver to use...
	I1114 13:44:56.150099 1286388 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 13:44:56.150158 1286388 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1114 13:44:56.165475 1286388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1114 13:44:56.180247 1286388 docker.go:203] disabling cri-docker service (if available) ...
	I1114 13:44:56.180318 1286388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 13:44:56.197602 1286388 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 13:44:56.215024 1286388 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 13:44:56.308840 1286388 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 13:44:56.417241 1286388 docker.go:219] disabling docker service ...
	I1114 13:44:56.417366 1286388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 13:44:56.441893 1286388 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 13:44:56.458217 1286388 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 13:44:56.555477 1286388 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 13:44:56.659270 1286388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 13:44:56.672883 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:44:56.694958 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1114 13:44:56.707381 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1114 13:44:56.720217 1286388 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1114 13:44:56.720311 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1114 13:44:56.732597 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:44:56.745384 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1114 13:44:56.758343 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1114 13:44:56.772197 1286388 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:44:56.784611 1286388 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1114 13:44:56.798379 1286388 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:44:56.809581 1286388 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:44:56.820309 1286388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:44:56.912019 1286388 ssh_runner.go:195] Run: sudo systemctl restart containerd
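Note: the containerd retune above is in-place sed surgery on /etc/containerd/config.toml followed by a daemon-reload and restart. The two edits that matter most for a v1.18 control plane are pinning the sandbox (pause) image and forcing the cgroupfs driver to match the driver detected on the host; condensed from the commands logged above:

	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd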
	I1114 13:44:57.060963 1286388 start.go:519] Will wait 60s for socket path /run/containerd/containerd.sock
	I1114 13:44:57.061089 1286388 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1114 13:44:57.066435 1286388 start.go:540] Will wait 60s for crictl version
	I1114 13:44:57.066560 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:44:57.071812 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 13:44:57.118137 1286388 start.go:556] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.6.24
	RuntimeApiVersion:  v1
	I1114 13:44:57.118257 1286388 ssh_runner.go:195] Run: containerd --version
	I1114 13:44:57.147309 1286388 ssh_runner.go:195] Run: containerd --version
	I1114 13:44:57.178290 1286388 out.go:177] * Preparing Kubernetes v1.18.20 on containerd 1.6.24 ...
	I1114 13:44:57.180378 1286388 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-011886 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:44:57.198307 1286388 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1114 13:44:57.203253 1286388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
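Note: the /etc/hosts update above uses a filter-append-copy idiom instead of sed -i: drop any existing line for the host, append the fresh entry, write to a temp file, then replace the whole file in a single cp so the entry is never duplicated. The same idiom for an arbitrary entry (the hostname and IP below are hypothetical):

	{ grep -v $'\thost.example.internal$' /etc/hosts; echo $'192.0.2.10\thost.example.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts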
	I1114 13:44:57.217529 1286388 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime containerd
	I1114 13:44:57.217600 1286388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:44:57.260603 1286388 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 13:44:57.260687 1286388 ssh_runner.go:195] Run: which lz4
	I1114 13:44:57.265254 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1114 13:44:57.265359 1286388 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 13:44:57.270058 1286388 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 13:44:57.270095 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-containerd-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (489149349 bytes)
	I1114 13:44:59.464545 1286388 containerd.go:547] Took 2.199228 seconds to copy over tarball
	I1114 13:44:59.464656 1286388 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 13:45:02.774370 1286388 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.309663103s)
	I1114 13:45:02.774414 1286388 containerd.go:554] Took 3.309802 seconds to extract the tarball
	I1114 13:45:02.774426 1286388 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 13:45:02.863057 1286388 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:45:02.966454 1286388 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1114 13:45:03.125102 1286388 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:45:03.180526 1286388 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 13:45:03.180555 1286388 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 13:45:03.180608 1286388 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:45:03.180661 1286388 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:45:03.180918 1286388 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:45:03.181023 1286388 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:45:03.181125 1286388 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:45:03.181338 1286388 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:45:03.181463 1286388 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1114 13:45:03.181622 1286388 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1114 13:45:03.184098 1286388 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:45:03.184323 1286388 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1114 13:45:03.184639 1286388 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1114 13:45:03.184682 1286388 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:45:03.184735 1286388 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:45:03.184852 1286388 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:45:03.184899 1286388 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:45:03.185047 1286388 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	W1114 13:45:03.508662 1286388 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.508858 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20"
	W1114 13:45:03.553738 1286388 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.553893 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns:1.6.7"
	W1114 13:45:03.563841 1286388 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.564007 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.18.20"
	W1114 13:45:03.569677 1286388 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.569811 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.18.20"
	W1114 13:45:03.578977 1286388 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.579155 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.4.3-0"
	I1114 13:45:03.579340 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.2"
	W1114 13:45:03.595828 1286388 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.596064 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.18.20"
	W1114 13:45:03.776090 1286388 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1114 13:45:03.776210 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I1114 13:45:03.853232 1286388 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1114 13:45:03.853304 1286388 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:45:03.853369 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:03.897752 1286388 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1114 13:45:03.897797 1286388 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1114 13:45:03.897866 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.106061 1286388 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1114 13:45:04.106147 1286388 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:45:04.106213 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.364315 1286388 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1114 13:45:04.364459 1286388 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:45:04.364556 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.368730 1286388 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1114 13:45:04.368863 1286388 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1114 13:45:04.368945 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.369030 1286388 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1114 13:45:04.369071 1286388 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:45:04.369116 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.395897 1286388 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1114 13:45:04.395990 1286388 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:45:04.396088 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.427821 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:45:04.427896 1286388 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1114 13:45:04.427934 1286388 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:45:04.427976 1286388 ssh_runner.go:195] Run: which crictl
	I1114 13:45:04.428032 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1114 13:45:04.428090 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:45:04.428147 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:45:04.428223 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1114 13:45:04.428267 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1114 13:45:04.428318 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:45:04.629531 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1114 13:45:04.629638 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1114 13:45:04.629744 1286388 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:45:04.629847 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1114 13:45:04.629906 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1114 13:45:04.629949 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1114 13:45:04.629988 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1114 13:45:04.630029 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1114 13:45:04.685040 1286388 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 13:45:04.685196 1286388 cache_images.go:92] LoadImages completed in 1.504620178s
	W1114 13:45:04.685285 1286388 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
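Note: the warning above is the architecture-mismatch path. The preloaded control-plane images are amd64 while this node wants arm64, so each image is verified via ctr, removed via crictl, and a reload from the on-disk image cache is attempted; the reload is skipped here because the arm64 tarballs (e.g. kube-proxy_v1.18.20) were never cached. The check/remove pair, exactly as run on the node for one image:

	sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.18.20
	sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20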
	I1114 13:45:04.685383 1286388 ssh_runner.go:195] Run: sudo crictl info
	I1114 13:45:04.730689 1286388 cni.go:84] Creating CNI manager for ""
	I1114 13:45:04.730714 1286388 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:45:04.730769 1286388 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 13:45:04.730795 1286388 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-011886 NodeName:ingress-addon-legacy-011886 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 13:45:04.730937 1286388 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "ingress-addon-legacy-011886"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 13:45:04.731045 1286388 kubeadm.go:976] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=ingress-addon-legacy-011886 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-011886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 13:45:04.731116 1286388 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1114 13:45:04.742510 1286388 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 13:45:04.742600 1286388 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 13:45:04.754088 1286388 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I1114 13:45:04.776232 1286388 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1114 13:45:04.798727 1286388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
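Note: the 2131-byte file staged above is the kubeadm config printed earlier, targeting the legacy kubeadm.k8s.io/v1beta2 API. One way to sanity-check such a config without mutating cluster state is kubeadm's dry-run mode; a hedged sketch, assuming the binary and path used in this run:

	sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run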
	I1114 13:45:04.821657 1286388 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1114 13:45:04.826435 1286388 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:45:04.840928 1286388 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886 for IP: 192.168.49.2
	I1114 13:45:04.840964 1286388 certs.go:190] acquiring lock for shared ca certs: {Name:mk0ee92e20cab7092abbb9be784c32bf39215f61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:04.841142 1286388 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key
	I1114 13:45:04.841192 1286388 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key
	I1114 13:45:04.841243 1286388 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.key
	I1114 13:45:04.841260 1286388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt with IP's: []
	I1114 13:45:05.074459 1286388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt ...
	I1114 13:45:05.074504 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: {Name:mk3321052d4ce51d320197d943aed0204c1c4fb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:05.074732 1286388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.key ...
	I1114 13:45:05.074748 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.key: {Name:mkc0e2635ce4a22e9d54c857db3623b5f370a157 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:05.074833 1286388 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key.dd3b5fb2
	I1114 13:45:05.074847 1286388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 13:45:06.859957 1286388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt.dd3b5fb2 ...
	I1114 13:45:06.859990 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt.dd3b5fb2: {Name:mk2ef723fec39f47ee3ee1a30d35d4b278c69762 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:06.860270 1286388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key.dd3b5fb2 ...
	I1114 13:45:06.860301 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key.dd3b5fb2: {Name:mk11ec5bea88f4d8e062d7f557ff50cc531b5bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:06.860418 1286388 certs.go:337] copying /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt
	I1114 13:45:06.860504 1286388 certs.go:341] copying /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key
	I1114 13:45:06.860572 1286388 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.key
	I1114 13:45:06.860591 1286388 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.crt with IP's: []
	I1114 13:45:08.453823 1286388 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.crt ...
	I1114 13:45:08.453857 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.crt: {Name:mk1214c98448bb6720e1144b70896adf932ee8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:08.454039 1286388 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.key ...
	I1114 13:45:08.454051 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.key: {Name:mk2c66776b43167c6f26eff2f63a8009060074e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:08.454121 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 13:45:08.454140 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 13:45:08.454153 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 13:45:08.454164 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 13:45:08.454175 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 13:45:08.454187 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 13:45:08.454197 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 13:45:08.454207 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 13:45:08.454260 1286388 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/1251905.pem (1338 bytes)
	W1114 13:45:08.454300 1286388 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/1251905_empty.pem, impossibly tiny 0 bytes
	I1114 13:45:08.454311 1286388 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 13:45:08.454336 1286388 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/ca.pem (1078 bytes)
	I1114 13:45:08.454360 1286388 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/cert.pem (1123 bytes)
	I1114 13:45:08.454386 1286388 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/certs/key.pem (1679 bytes)
	I1114 13:45:08.454454 1286388 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/12519052.pem (1708 bytes)
	I1114 13:45:08.454480 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/12519052.pem -> /usr/share/ca-certificates/12519052.pem
	I1114 13:45:08.454492 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:45:08.454502 1286388 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/1251905.pem -> /usr/share/ca-certificates/1251905.pem
	I1114 13:45:08.455077 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 13:45:08.490186 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 13:45:08.521614 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 13:45:08.553662 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 13:45:08.584744 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 13:45:08.616373 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 13:45:08.647164 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 13:45:08.677986 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1114 13:45:08.707925 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/ssl/certs/12519052.pem --> /usr/share/ca-certificates/12519052.pem (1708 bytes)
	I1114 13:45:08.738471 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 13:45:08.768780 1286388 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1246551/.minikube/certs/1251905.pem --> /usr/share/ca-certificates/1251905.pem (1338 bytes)
	I1114 13:45:08.799281 1286388 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
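Note: with the key material copied, the staged apiserver certificate can be checked against the SAN list requested at generation time (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1, as logged above); a minimal sketch using openssl on the node:

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'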
	I1114 13:45:08.821444 1286388 ssh_runner.go:195] Run: openssl version
	I1114 13:45:08.829466 1286388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1251905.pem && ln -fs /usr/share/ca-certificates/1251905.pem /etc/ssl/certs/1251905.pem"
	I1114 13:45:08.842232 1286388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1251905.pem
	I1114 13:45:08.847260 1286388 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:41 /usr/share/ca-certificates/1251905.pem
	I1114 13:45:08.847377 1286388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1251905.pem
	I1114 13:45:08.856356 1286388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1251905.pem /etc/ssl/certs/51391683.0"
	I1114 13:45:08.868648 1286388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12519052.pem && ln -fs /usr/share/ca-certificates/12519052.pem /etc/ssl/certs/12519052.pem"
	I1114 13:45:08.881126 1286388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12519052.pem
	I1114 13:45:08.885986 1286388 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:41 /usr/share/ca-certificates/12519052.pem
	I1114 13:45:08.886091 1286388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12519052.pem
	I1114 13:45:08.895169 1286388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/12519052.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 13:45:08.907336 1286388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 13:45:08.919840 1286388 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:45:08.925141 1286388 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:35 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:45:08.925277 1286388 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:45:08.934243 1286388 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
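
The /etc/ssl/certs/<hash>.0 names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: each CA is linked under the hash of its subject so verification with -CApath can find it. A minimal sketch of the same two steps for one certificate, using paths taken from the log:

    # Compute the subject hash OpenSSL uses to locate a CA during verification
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Link the CA under <hash>.0 so `openssl verify -CApath /etc/ssl/certs ...` resolves it
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
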
	I1114 13:45:08.946672 1286388 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 13:45:08.951625 1286388 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 13:45:08.951732 1286388 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-011886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-011886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:45:08.951825 1286388 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1114 13:45:08.951889 1286388 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 13:45:08.996933 1286388 cri.go:89] found id: ""
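
The empty ID list confirms a clean node: no kube-system containers exist yet, so this is a first start rather than a restart. The same check can be reproduced by hand against containerd (runtime endpoint assumed from the default containerd socket used here):

    # List IDs of all kube-system pod containers, running or exited
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
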
	I1114 13:45:08.997050 1286388 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 13:45:09.009714 1286388 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 13:45:09.022185 1286388 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1114 13:45:09.022299 1286388 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 13:45:09.034283 1286388 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 13:45:09.034366 1286388 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1114 13:45:09.094897 1286388 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1114 13:45:09.095345 1286388 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 13:45:09.149973 1286388 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1114 13:45:09.150122 1286388 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1114 13:45:09.150215 1286388 kubeadm.go:322] OS: Linux
	I1114 13:45:09.150304 1286388 kubeadm.go:322] CGROUPS_CPU: enabled
	I1114 13:45:09.150401 1286388 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1114 13:45:09.150482 1286388 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1114 13:45:09.150560 1286388 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1114 13:45:09.150638 1286388 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1114 13:45:09.150717 1286388 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1114 13:45:09.249206 1286388 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 13:45:09.249396 1286388 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 13:45:09.249536 1286388 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
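
The SystemVerification failure printed above is expected inside the docker driver (the kernel "configs" module is not exposed to the container), which is why it appears in the --ignore-preflight-errors list. As a hedged sketch, the preflight checks can also be run on their own with kubeadm's phase subcommand, without initializing anything:

    # Run only kubeadm's preflight phase against the generated config
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
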
	I1114 13:45:09.503466 1286388 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 13:45:09.503592 1286388 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 13:45:09.503668 1286388 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 13:45:09.620911 1286388 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 13:45:09.623027 1286388 out.go:204]   - Generating certificates and keys ...
	I1114 13:45:09.623131 1286388 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 13:45:09.623207 1286388 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 13:45:09.879545 1286388 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 13:45:10.547076 1286388 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 13:45:11.206732 1286388 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 13:45:11.436156 1286388 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 13:45:11.878416 1286388 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 13:45:11.878739 1286388 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-011886 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:45:12.589168 1286388 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 13:45:12.589631 1286388 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-011886 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:45:12.916515 1286388 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 13:45:13.190419 1286388 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 13:45:14.114801 1286388 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 13:45:14.115278 1286388 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 13:45:14.504217 1286388 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 13:45:14.804911 1286388 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 13:45:15.233333 1286388 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 13:45:15.986513 1286388 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 13:45:15.987650 1286388 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 13:45:15.989929 1286388 out.go:204]   - Booting up control plane ...
	I1114 13:45:15.990075 1286388 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 13:45:16.009294 1286388 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 13:45:16.011240 1286388 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 13:45:16.012615 1286388 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 13:45:16.022574 1286388 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 13:45:27.525793 1286388 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502945 seconds
	I1114 13:45:27.525946 1286388 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 13:45:27.541891 1286388 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 13:45:28.065978 1286388 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 13:45:28.066182 1286388 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-011886 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 13:45:28.577191 1286388 kubeadm.go:322] [bootstrap-token] Using token: lnrla9.gzdlv105y8aly2fi
	I1114 13:45:28.579962 1286388 out.go:204]   - Configuring RBAC rules ...
	I1114 13:45:28.580100 1286388 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 13:45:28.586537 1286388 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 13:45:28.600548 1286388 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 13:45:28.604027 1286388 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 13:45:28.607341 1286388 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 13:45:28.610368 1286388 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 13:45:28.620647 1286388 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 13:45:28.923001 1286388 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 13:45:29.016016 1286388 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 13:45:29.018162 1286388 kubeadm.go:322] 
	I1114 13:45:29.018234 1286388 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 13:45:29.018248 1286388 kubeadm.go:322] 
	I1114 13:45:29.018321 1286388 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 13:45:29.018331 1286388 kubeadm.go:322] 
	I1114 13:45:29.018356 1286388 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 13:45:29.018413 1286388 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 13:45:29.018465 1286388 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 13:45:29.018476 1286388 kubeadm.go:322] 
	I1114 13:45:29.018526 1286388 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 13:45:29.018600 1286388 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 13:45:29.018670 1286388 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 13:45:29.018677 1286388 kubeadm.go:322] 
	I1114 13:45:29.018755 1286388 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 13:45:29.018829 1286388 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 13:45:29.018840 1286388 kubeadm.go:322] 
	I1114 13:45:29.018918 1286388 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lnrla9.gzdlv105y8aly2fi \
	I1114 13:45:29.019021 1286388 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:94fcb55293605b3288e68ff2d845228e62826801cfd59b170f6499414c73b553 \
	I1114 13:45:29.019052 1286388 kubeadm.go:322]     --control-plane 
	I1114 13:45:29.019060 1286388 kubeadm.go:322] 
	I1114 13:45:29.019139 1286388 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 13:45:29.019148 1286388 kubeadm.go:322] 
	I1114 13:45:29.019225 1286388 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lnrla9.gzdlv105y8aly2fi \
	I1114 13:45:29.019326 1286388 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:94fcb55293605b3288e68ff2d845228e62826801cfd59b170f6499414c73b553 
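
Bootstrap tokens like lnrla9.gzdlv105y8aly2fi are short-lived (24h by default), so the printed join command goes stale; a fresh one can be generated on the control plane at any time with standard kubeadm, nothing minikube-specific:

    # Mint a new bootstrap token and print a complete `kubeadm join` command
    sudo kubeadm token create --print-join-command
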
	I1114 13:45:29.031407 1286388 kubeadm.go:322] W1114 13:45:09.093923    1107 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1114 13:45:29.031622 1286388 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 13:45:29.031727 1286388 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 13:45:29.031848 1286388 kubeadm.go:322] W1114 13:45:16.009374    1107 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 13:45:29.031982 1286388 kubeadm.go:322] W1114 13:45:16.010973    1107 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 13:45:29.032000 1286388 cni.go:84] Creating CNI manager for ""
	I1114 13:45:29.032008 1286388 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:45:29.033993 1286388 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 13:45:29.035655 1286388 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 13:45:29.041592 1286388 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1114 13:45:29.041615 1286388 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 13:45:29.066378 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 13:45:29.514743 1286388 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 13:45:29.514891 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:29.514970 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b minikube.k8s.io/name=ingress-addon-legacy-011886 minikube.k8s.io/updated_at=2023_11_14T13_45_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:29.536325 1286388 ops.go:34] apiserver oom_adj: -16
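
The -16 read back here lowers the apiserver's chance of being picked by the kernel OOM killer. The probe is just a procfs read and can be repeated directly (oom_score_adj is the non-deprecated counterpart on current kernels):

    # Inspect the OOM adjustment of the running kube-apiserver
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
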
	I1114 13:45:29.695072 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:29.804403 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:30.405406 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:30.904855 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:31.405272 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:31.904902 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:32.404828 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:32.905488 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:33.404919 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:33.905553 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:34.404870 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:34.904858 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:35.404862 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:35.904869 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:36.405639 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:36.905228 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:37.404755 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:37.905614 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:38.405290 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:38.905718 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:39.405297 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:39.905044 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:40.405597 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:40.905024 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:41.405723 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:41.905029 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:42.404968 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:42.905201 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:43.405534 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:43.904910 1286388 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:45:44.091990 1286388 kubeadm.go:1081] duration metric: took 14.577157181s to wait for elevateKubeSystemPrivileges.
	I1114 13:45:44.092029 1286388 kubeadm.go:406] StartCluster complete in 35.140301556s
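
The burst of identical `kubectl get sa default` calls above (13:45:29 through 13:45:44) is a poll: kubeadm creates the "default" service account asynchronously, and minikube retries roughly twice a second until it exists. A minimal equivalent of that wait loop (the retry count is illustrative):

    # Poll every 500ms, up to ~30s, until the default ServiceAccount appears
    for i in $(seq 1 60); do
      sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
      sleep 0.5
    done
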
	I1114 13:45:44.092046 1286388 settings.go:142] acquiring lock: {Name:mk455c6657f7b4efcfce9307d68afe3ebcb2d6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:44.092110 1286388 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:45:44.092856 1286388 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1246551/kubeconfig: {Name:mk184f3168528a648dd99c6da0ef538261acbd95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:45:44.093630 1286388 kapi.go:59] client config for ingress-addon-legacy-011886: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:45:44.095003 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 13:45:44.095267 1286388 config.go:182] Loaded profile config "ingress-addon-legacy-011886": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.20
	I1114 13:45:44.095309 1286388 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 13:45:44.095371 1286388 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-011886"
	I1114 13:45:44.095385 1286388 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-011886"
	I1114 13:45:44.095424 1286388 host.go:66] Checking if "ingress-addon-legacy-011886" exists ...
	I1114 13:45:44.095870 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Status}}
	I1114 13:45:44.096552 1286388 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 13:45:44.096622 1286388 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-011886"
	I1114 13:45:44.096671 1286388 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-011886"
	I1114 13:45:44.097018 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Status}}
	I1114 13:45:44.156919 1286388 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:45:44.158847 1286388 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:45:44.158870 1286388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 13:45:44.158939 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:45:44.173025 1286388 kapi.go:59] client config for ingress-addon-legacy-011886: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:45:44.173294 1286388 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-011886"
	I1114 13:45:44.173331 1286388 host.go:66] Checking if "ingress-addon-legacy-011886" exists ...
	I1114 13:45:44.173879 1286388 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-011886 --format={{.State.Status}}
	I1114 13:45:44.190228 1286388 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-011886" context rescaled to 1 replicas
	I1114 13:45:44.190269 1286388 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1114 13:45:44.195640 1286388 out.go:177] * Verifying Kubernetes components...
	I1114 13:45:44.197451 1286388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:45:44.219972 1286388 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 13:45:44.222944 1286388 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 13:45:44.223034 1286388 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-011886
	I1114 13:45:44.228998 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:45:44.256294 1286388 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34352 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/ingress-addon-legacy-011886/id_rsa Username:docker}
	I1114 13:45:44.442578 1286388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 13:45:44.443255 1286388 kapi.go:59] client config for ingress-addon-legacy-011886: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:45:44.443557 1286388 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-011886" to be "Ready" ...
	I1114 13:45:44.447429 1286388 node_ready.go:49] node "ingress-addon-legacy-011886" has status "Ready":"True"
	I1114 13:45:44.447454 1286388 node_ready.go:38] duration metric: took 3.876638ms waiting for node "ingress-addon-legacy-011886" to be "Ready" ...
	I1114 13:45:44.447465 1286388 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:45:44.456661 1286388 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace to be "Ready" ...
	I1114 13:45:44.535441 1286388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:45:44.551242 1286388 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 13:45:45.103444 1286388 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
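
The sed pipeline at 13:45:44 rewrites the coredns ConfigMap in flight; after the replace, the Corefile carries a hosts block so in-cluster DNS resolves host.minikube.internal to the docker gateway. Reconstructed from the sed expressions, the injected stanza is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
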
	I1114 13:45:45.396022 1286388 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1114 13:45:45.403575 1286388 addons.go:502] enable addons completed in 1.308252738s: enabled=[storage-provisioner default-storageclass]
	I1114 13:45:46.475004 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:45:48.968738 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:45:51.469207 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:45:53.968328 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:45:56.467940 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:45:58.469055 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:46:00.470664 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:46:02.968733 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:46:04.969055 1286388 pod_ready.go:102] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"False"
	I1114 13:46:06.969038 1286388 pod_ready.go:92] pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace has status "Ready":"True"
	I1114 13:46:06.969067 1286388 pod_ready.go:81] duration metric: took 22.512369979s waiting for pod "coredns-66bff467f8-zlc4f" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.969079 1286388 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.974322 1286388 pod_ready.go:92] pod "etcd-ingress-addon-legacy-011886" in "kube-system" namespace has status "Ready":"True"
	I1114 13:46:06.974349 1286388 pod_ready.go:81] duration metric: took 5.262642ms waiting for pod "etcd-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.974368 1286388 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.979599 1286388 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-011886" in "kube-system" namespace has status "Ready":"True"
	I1114 13:46:06.979627 1286388 pod_ready.go:81] duration metric: took 5.251475ms waiting for pod "kube-apiserver-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.979645 1286388 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.984749 1286388 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-011886" in "kube-system" namespace has status "Ready":"True"
	I1114 13:46:06.984777 1286388 pod_ready.go:81] duration metric: took 5.124115ms waiting for pod "kube-controller-manager-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.984824 1286388 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-77ndl" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.990623 1286388 pod_ready.go:92] pod "kube-proxy-77ndl" in "kube-system" namespace has status "Ready":"True"
	I1114 13:46:06.990648 1286388 pod_ready.go:81] duration metric: took 5.815469ms waiting for pod "kube-proxy-77ndl" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:06.990660 1286388 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:07.164063 1286388 request.go:629] Waited for 173.288282ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-011886
	I1114 13:46:07.364225 1286388 request.go:629] Waited for 197.330338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-011886
	I1114 13:46:07.367301 1286388 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-011886" in "kube-system" namespace has status "Ready":"True"
	I1114 13:46:07.367329 1286388 pod_ready.go:81] duration metric: took 376.661566ms waiting for pod "kube-scheduler-ingress-addon-legacy-011886" in "kube-system" namespace to be "Ready" ...
	I1114 13:46:07.367343 1286388 pod_ready.go:38] duration metric: took 22.919864886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:46:07.367379 1286388 api_server.go:52] waiting for apiserver process to appear ...
	I1114 13:46:07.367460 1286388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:46:07.381030 1286388 api_server.go:72] duration metric: took 23.190729418s to wait for apiserver process to appear ...
	I1114 13:46:07.381106 1286388 api_server.go:88] waiting for apiserver healthz status ...
	I1114 13:46:07.381142 1286388 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1114 13:46:07.390314 1286388 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1114 13:46:07.391307 1286388 api_server.go:141] control plane version: v1.18.20
	I1114 13:46:07.391332 1286388 api_server.go:131] duration metric: took 10.204431ms to wait for apiserver health ...
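
The healthz check is a plain HTTPS GET that expects a 200 with the literal body "ok"; /healthz is served to unauthenticated clients by default, so it can be reproduced with curl against the same endpoint (use the cluster CA, or -k to skip verification):

    # Should print "ok" while the apiserver is healthy
    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.49.2:8443/healthz
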
	I1114 13:46:07.391342 1286388 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 13:46:07.563698 1286388 request.go:629] Waited for 172.29398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1114 13:46:07.569889 1286388 system_pods.go:59] 8 kube-system pods found
	I1114 13:46:07.569990 1286388 system_pods.go:61] "coredns-66bff467f8-zlc4f" [95ead850-d85a-4a68-a6f8-18d530b85d3b] Running
	I1114 13:46:07.570012 1286388 system_pods.go:61] "etcd-ingress-addon-legacy-011886" [c0692f88-4465-4ace-a6ca-8128044cccf9] Running
	I1114 13:46:07.570032 1286388 system_pods.go:61] "kindnet-srljf" [fdceaf22-08b9-47a5-bed6-40cc149ea2f4] Running
	I1114 13:46:07.570047 1286388 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-011886" [d548dcdb-c698-499c-baa2-a41d499a678b] Running
	I1114 13:46:07.570059 1286388 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-011886" [863ed12a-6b44-4ec6-a888-768d59bd0eb9] Running
	I1114 13:46:07.570065 1286388 system_pods.go:61] "kube-proxy-77ndl" [edde8b53-ed2d-4307-8e91-1aaf3b350f2d] Running
	I1114 13:46:07.570070 1286388 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-011886" [4f55266f-19ef-497a-8642-8f36cf2ecf2d] Running
	I1114 13:46:07.570075 1286388 system_pods.go:61] "storage-provisioner" [a68e9945-999c-4744-937b-4265435ae8a3] Running
	I1114 13:46:07.570081 1286388 system_pods.go:74] duration metric: took 178.733086ms to wait for pod list to return data ...
	I1114 13:46:07.570091 1286388 default_sa.go:34] waiting for default service account to be created ...
	I1114 13:46:07.763451 1286388 request.go:629] Waited for 193.275626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1114 13:46:07.766694 1286388 default_sa.go:45] found service account: "default"
	I1114 13:46:07.766722 1286388 default_sa.go:55] duration metric: took 196.621376ms for default service account to be created ...
	I1114 13:46:07.766733 1286388 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 13:46:07.964190 1286388 request.go:629] Waited for 197.367999ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1114 13:46:07.970548 1286388 system_pods.go:86] 8 kube-system pods found
	I1114 13:46:07.970637 1286388 system_pods.go:89] "coredns-66bff467f8-zlc4f" [95ead850-d85a-4a68-a6f8-18d530b85d3b] Running
	I1114 13:46:07.970656 1286388 system_pods.go:89] "etcd-ingress-addon-legacy-011886" [c0692f88-4465-4ace-a6ca-8128044cccf9] Running
	I1114 13:46:07.970663 1286388 system_pods.go:89] "kindnet-srljf" [fdceaf22-08b9-47a5-bed6-40cc149ea2f4] Running
	I1114 13:46:07.970668 1286388 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-011886" [d548dcdb-c698-499c-baa2-a41d499a678b] Running
	I1114 13:46:07.970678 1286388 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-011886" [863ed12a-6b44-4ec6-a888-768d59bd0eb9] Running
	I1114 13:46:07.970684 1286388 system_pods.go:89] "kube-proxy-77ndl" [edde8b53-ed2d-4307-8e91-1aaf3b350f2d] Running
	I1114 13:46:07.970691 1286388 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-011886" [4f55266f-19ef-497a-8642-8f36cf2ecf2d] Running
	I1114 13:46:07.970697 1286388 system_pods.go:89] "storage-provisioner" [a68e9945-999c-4744-937b-4265435ae8a3] Running
	I1114 13:46:07.970705 1286388 system_pods.go:126] duration metric: took 203.967547ms to wait for k8s-apps to be running ...
	I1114 13:46:07.970714 1286388 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 13:46:07.970778 1286388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:46:07.985196 1286388 system_svc.go:56] duration metric: took 14.471344ms WaitForService to wait for kubelet.
	I1114 13:46:07.985222 1286388 kubeadm.go:581] duration metric: took 23.794929499s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 13:46:07.985252 1286388 node_conditions.go:102] verifying NodePressure condition ...
	I1114 13:46:08.163536 1286388 request.go:629] Waited for 178.21568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1114 13:46:08.166460 1286388 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 13:46:08.166493 1286388 node_conditions.go:123] node cpu capacity is 2
	I1114 13:46:08.166504 1286388 node_conditions.go:105] duration metric: took 181.246717ms to run NodePressure ...
	I1114 13:46:08.166517 1286388 start.go:228] waiting for startup goroutines ...
	I1114 13:46:08.166523 1286388 start.go:233] waiting for cluster config update ...
	I1114 13:46:08.166536 1286388 start.go:242] writing updated cluster config ...
	I1114 13:46:08.166823 1286388 ssh_runner.go:195] Run: rm -f paused
	I1114 13:46:08.229322 1286388 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1114 13:46:08.231535 1286388 out.go:177] 
	W1114 13:46:08.233294 1286388 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1114 13:46:08.235033 1286388 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1114 13:46:08.236870 1286388 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-011886" cluster and "default" namespace by default
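
The skew warning a few lines up flags a ten-minor-version gap between the host kubectl (1.28.3) and the cluster (1.18.20), far outside kubectl's supported +/-1 minor skew; the bundled, version-matched client suggested in the output sidesteps it:

    # Download (on first use) and run a kubectl matching the cluster version
    minikube -p ingress-addon-legacy-011886 kubectl -- get pods -A
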
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c7e0e04c23d87       dd1b12fcb6097       12 seconds ago       Exited              hello-world-app           2                   2a9e732987b6b       hello-world-app-5f5d8b66bb-56p6w
	b1ebbf8e6bf14       aae348c9fbd40       35 seconds ago       Running             nginx                     0                   d467c67339c61       nginx
	c1e49dff4d8ef       d7f0cba3aa5bf       55 seconds ago       Exited              controller                0                   970744cd243d7       ingress-nginx-controller-7fcf777cb7-7mjcn
	c9a305818ee47       a883f7fc35610       About a minute ago   Exited              patch                     0                   3963aef1b363f       ingress-nginx-admission-patch-2nqkf
	64ee9df1519d9       a883f7fc35610       About a minute ago   Exited              create                    0                   b002e62a8cf48       ingress-nginx-admission-create-q8xtz
	e75db6096f97a       6e17ba78cf3eb       About a minute ago   Running             coredns                   0                   3279e554a00dc       coredns-66bff467f8-zlc4f
	100e53905b92f       ba04bb24b9575       About a minute ago   Running             storage-provisioner       0                   630a63819c741       storage-provisioner
	6f26b9ef7d575       04b4eaa3d3db8       About a minute ago   Running             kindnet-cni               0                   ed9444d4c31f9       kindnet-srljf
	48eed6e2ee0ba       565297bc6f7d4       About a minute ago   Running             kube-proxy                0                   a81ad09a58cb4       kube-proxy-77ndl
	c38faf3ac7560       095f37015706d       About a minute ago   Running             kube-scheduler            0                   74bc41ccbac91       kube-scheduler-ingress-addon-legacy-011886
	9c252381bdc48       2694cf044d665       About a minute ago   Running             kube-apiserver            0                   80ac1e81c4569       kube-apiserver-ingress-addon-legacy-011886
	99ee459c0dbb8       ab707b0a0ea33       About a minute ago   Running             etcd                      0                   af35c5a21ebc5       etcd-ingress-addon-legacy-011886
	7b238ab85afa8       68a4fac29a865       About a minute ago   Running             kube-controller-manager   0                   fd716f7bf1d29       kube-controller-manager-ingress-addon-legacy-011886
	
	* 
	* ==> containerd <==
	* Nov 14 13:47:00 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:00.684089152Z" level=info msg="RemoveContainer for \"9a3c371603936715bef48ad763b956cb622eaf5ec22b5ba738facf2798a4701d\" returns successfully"
	Nov 14 13:47:03 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:03.363474352Z" level=info msg="StopContainer for \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" with timeout 2 (s)"
	Nov 14 13:47:03 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:03.363930104Z" level=info msg="Stop container \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" with signal terminated"
	Nov 14 13:47:03 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:03.384044846Z" level=info msg="StopContainer for \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" with timeout 2 (s)"
	Nov 14 13:47:03 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:03.389537903Z" level=info msg="Skipping the sending of signal terminated to container \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" because a prior stop with timeout>0 request already sent the signal"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.376726193Z" level=info msg="Kill container \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\""
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.390410807Z" level=info msg="Kill container \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\""
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.463694946Z" level=info msg="shim disconnected" id=c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.463764147Z" level=warning msg="cleaning up after shim disconnected" id=c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb namespace=k8s.io
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.463775470Z" level=info msg="cleaning up dead shim"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.473912185Z" level=warning msg="cleanup warnings time=\"2023-11-14T13:47:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4656 runtime=io.containerd.runc.v2\n"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.476720912Z" level=info msg="StopContainer for \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" returns successfully"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.477416712Z" level=info msg="StopContainer for \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" returns successfully"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.478045887Z" level=info msg="StopPodSandbox for \"970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183\""
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.478127446Z" level=info msg="Container to stop \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.478367117Z" level=info msg="StopPodSandbox for \"970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183\""
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.478405418Z" level=info msg="Container to stop \"c1e49dff4d8efe7a798676f4cf6046d19646021c7f0e94d95c421237307116fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.517059369Z" level=info msg="shim disconnected" id=970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.517127651Z" level=warning msg="cleaning up after shim disconnected" id=970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183 namespace=k8s.io
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.517142437Z" level=info msg="cleaning up dead shim"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.527663453Z" level=warning msg="cleanup warnings time=\"2023-11-14T13:47:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4693 runtime=io.containerd.runc.v2\n"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.583466857Z" level=info msg="TearDown network for sandbox \"970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183\" successfully"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.583644621Z" level=info msg="StopPodSandbox for \"970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183\" returns successfully"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.589549295Z" level=info msg="TearDown network for sandbox \"970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183\" successfully"
	Nov 14 13:47:05 ingress-addon-legacy-011886 containerd[827]: time="2023-11-14T13:47:05.589698750Z" level=info msg="StopPodSandbox for \"970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183\" returns successfully"
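
The sequence above is containerd's two-step stop: SIGTERM with a 2s grace period, escalation to Kill when the deadline passes, then sandbox teardown. The CRI call the kubelet issues can be mimicked directly (container ID abbreviated as in the status table):

    # SIGTERM the container, escalating to SIGKILL after 2 seconds
    sudo crictl stop --timeout 2 c1e49dff4d8ef
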
	
	* 
	* ==> coredns [e75db6096f97ac6b03f3651a1f8f54148564699f08bdff0fde8460f8b3813495] <==
	* [INFO] 10.244.0.5:59075 - 16512 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000051069s
	[INFO] 10.244.0.5:35061 - 64557 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002378534s
	[INFO] 10.244.0.5:59075 - 5879 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001728231s
	[INFO] 10.244.0.5:35061 - 33290 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001455568s
	[INFO] 10.244.0.5:59075 - 46799 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001828653s
	[INFO] 10.244.0.5:59075 - 15002 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000122198s
	[INFO] 10.244.0.5:35061 - 26544 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000175376s
	[INFO] 10.244.0.5:46577 - 18691 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000079139s
	[INFO] 10.244.0.5:47730 - 58956 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048196s
	[INFO] 10.244.0.5:47730 - 21399 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000086105s
	[INFO] 10.244.0.5:46577 - 63604 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048697s
	[INFO] 10.244.0.5:46577 - 59523 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000060044s
	[INFO] 10.244.0.5:47730 - 35941 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051988s
	[INFO] 10.244.0.5:47730 - 35604 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046047s
	[INFO] 10.244.0.5:47730 - 43891 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042092s
	[INFO] 10.244.0.5:47730 - 28619 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000059068s
	[INFO] 10.244.0.5:46577 - 2850 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041476s
	[INFO] 10.244.0.5:47730 - 782 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001145727s
	[INFO] 10.244.0.5:46577 - 15075 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055467s
	[INFO] 10.244.0.5:46577 - 38591 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040205s
	[INFO] 10.244.0.5:47730 - 53123 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001322752s
	[INFO] 10.244.0.5:46577 - 42982 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000916337s
	[INFO] 10.244.0.5:47730 - 27513 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000049813s
	[INFO] 10.244.0.5:46577 - 34763 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000950323s
	[INFO] 10.244.0.5:46577 - 13119 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000042568s
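
The long runs of NXDOMAIN answers above are search-path expansion, not lookup failures: with the cluster default of ndots:5, a name like hello-world-app.default.svc.cluster.local is retried with every resolv.conf search suffix appended before the absolute name finally returns NOERROR. Judging by the suffixes queried, the client pod (10.244.0.5, in the ingress-nginx namespace) carries a resolv.conf along these lines (the nameserver is assumed to be the default kube-dns ClusterIP for the 10.96.0.0/12 service CIDR):

    nameserver 10.96.0.10
    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5
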
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-011886
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-011886
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=ingress-addon-legacy-011886
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_45_29_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:45:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-011886
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 13:47:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 13:47:02 +0000   Tue, 14 Nov 2023 13:45:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 13:47:02 +0000   Tue, 14 Nov 2023 13:45:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 13:47:02 +0000   Tue, 14 Nov 2023 13:45:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 13:47:02 +0000   Tue, 14 Nov 2023 13:45:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-011886
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ba8f4b27c9ec452ea74e32e849ce1abf
	  System UUID:                4a4d494a-ef26-4ebb-abcd-a248eba1f61b
	  Boot ID:                    a87df0b0-e3c4-42f4-a7f5-31b7e72e6999
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.6.24
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-56p6w                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-zlc4f                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     88s
	  kube-system                 etcd-ingress-addon-legacy-011886                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kindnet-srljf                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-011886             250m (12%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-011886    200m (10%)    0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-77ndl                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-ingress-addon-legacy-011886             100m (5%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-011886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x5 over 113s)  kubelet     Node ingress-addon-legacy-011886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x4 over 113s)  kubelet     Node ingress-addon-legacy-011886 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node ingress-addon-legacy-011886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node ingress-addon-legacy-011886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node ingress-addon-legacy-011886 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node ingress-addon-legacy-011886 status is now: NodeReady
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
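	
	This block is standard "kubectl describe node" output; to regenerate it against this cluster (assuming the kubeconfig context matches the profile name, as minikube sets it):
	
	  kubectl --context ingress-addon-legacy-011886 describe node ingress-addon-legacy-011886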
	
	* 
	* ==> dmesg <==
	* [  +0.001226] FS-Cache: O-key=[8] '4e415c0100000000'
	[  +0.000741] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001034] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000293e98e9
	[  +0.001136] FS-Cache: N-key=[8] '4e415c0100000000'
	[  +0.017830] FS-Cache: Duplicate cookie detected
	[  +0.000762] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=00000000c53663eb{9p.inode} n=00000000dbc70b3e
	[  +0.001152] FS-Cache: O-key=[8] '4e415c0100000000'
	[  +0.000751] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000978] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000641c12fa
	[  +0.001129] FS-Cache: N-key=[8] '4e415c0100000000'
	[  +2.490709] FS-Cache: Duplicate cookie detected
	[  +0.000771] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=00000000c53663eb{9p.inode} n=0000000027133988
	[  +0.001120] FS-Cache: O-key=[8] '4d415c0100000000'
	[  +0.000748] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000293e98e9
	[  +0.001159] FS-Cache: N-key=[8] '4d415c0100000000'
	[  +0.396727] FS-Cache: Duplicate cookie detected
	[  +0.000769] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001007] FS-Cache: O-cookie d=00000000c53663eb{9p.inode} n=00000000710c6bd7
	[  +0.001153] FS-Cache: O-key=[8] '53415c0100000000'
	[  +0.000795] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000980] FS-Cache: N-cookie d=00000000c53663eb{9p.inode} n=00000000f59726d1
	[  +0.001121] FS-Cache: N-key=[8] '53415c0100000000'
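	
	The FS-Cache "Duplicate cookie detected" messages appear to come from the 9p filesystem caching layer (note the 9p.inode cookies) used for host mounts, and are kernel noise rather than test failures. The same ring buffer can be captured from the node with the quoted-command form of ssh the report already uses:
	
	  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 ssh "sudo dmesg | tail -n 50"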
	
	* 
	* ==> etcd [99ee459c0dbb81c6944ce3658c5492fb46c2455dc66b5ba4b70a6b519b57eaf2] <==
	* raft2023/11/14 13:45:19 INFO: aec36adc501070cc became follower at term 0
	raft2023/11/14 13:45:19 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/11/14 13:45:19 INFO: aec36adc501070cc became follower at term 1
	raft2023/11/14 13:45:19 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-14 13:45:19.821226 W | auth: simple token is not cryptographically signed
	2023-11-14 13:45:19.917435 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-14 13:45:20.117667 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-14 13:45:20.470342 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 13:45:20.573076 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-14 13:45:20.573350 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/11/14 13:45:20 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-14 13:45:20.573955 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/11/14 13:45:21 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/14 13:45:21 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/14 13:45:21 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/14 13:45:21 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/14 13:45:21 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-14 13:45:21.550849 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-14 13:45:21.567788 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-14 13:45:21.567871 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-14 13:45:21.567938 I | etcdserver: published {Name:ingress-addon-legacy-011886 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-14 13:45:21.568124 I | embed: ready to serve client requests
	2023-11-14 13:45:21.570652 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-14 13:45:21.570729 I | embed: ready to serve client requests
	2023-11-14 13:45:21.572344 I | embed: serving client requests on 127.0.0.1:2379
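	
	The raft lines show a routine single-node bootstrap: the member starts as a follower at term 0, campaigns, and elects itself leader at term 2. Since the client TLS paths are logged above, the server's health can be queried in place (assuming etcdctl ships in the etcd image, as it does upstream):
	
	  kubectl -n kube-system exec etcd-ingress-addon-legacy-011886 -- sh -c \
	    "ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
	     --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	     --cert=/var/lib/minikube/certs/etcd/server.crt \
	     --key=/var/lib/minikube/certs/etcd/server.key endpoint status"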
	
	* 
	* ==> kernel <==
	*  13:47:11 up 10:29,  0 users,  load average: 1.45, 1.77, 1.65
	Linux ingress-addon-legacy-011886 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [6f26b9ef7d5750202ce074b6bb67854c6b0267a904fa3e9276c7c9854b5f8a36] <==
	* I1114 13:45:46.397664       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1114 13:45:46.397765       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1114 13:45:46.397927       1 main.go:116] setting mtu 1500 for CNI 
	I1114 13:45:46.397956       1 main.go:146] kindnetd IP family: "ipv4"
	I1114 13:45:46.397976       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1114 13:45:46.796423       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:45:46.796469       1 main.go:227] handling current node
	I1114 13:45:56.902793       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:45:56.902823       1 main.go:227] handling current node
	I1114 13:46:06.907184       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:06.907215       1 main.go:227] handling current node
	I1114 13:46:16.911451       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:16.911481       1 main.go:227] handling current node
	I1114 13:46:26.919199       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:26.919231       1 main.go:227] handling current node
	I1114 13:46:36.929788       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:36.929818       1 main.go:227] handling current node
	I1114 13:46:46.933436       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:46.933473       1 main.go:227] handling current node
	I1114 13:46:56.944397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:56.944429       1 main.go:227] handling current node
	I1114 13:47:06.956200       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:06.956233       1 main.go:227] handling current node
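	
	kindnet logs one "Handling node" pair per reconcile pass, roughly every ten seconds, confirming the CNI agent kept serving 10.244.0.0/16 for the single node throughout the test window. To follow it live (pod name taken from the node's pod list above):
	
	  kubectl -n kube-system logs kindnet-srljf --follow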
	
	* 
	* ==> kube-apiserver [9c252381bdc48d80e7ce658b6bd8e9559dcb3d133a6e2a19a3f59b792c7c8d87] <==
	* I1114 13:45:25.545474       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E1114 13:45:25.566126       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1114 13:45:25.742603       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1114 13:45:25.742685       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 13:45:25.742715       1 cache.go:39] Caches are synced for autoregister controller
	I1114 13:45:25.742998       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1114 13:45:25.743030       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 13:45:26.534107       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1114 13:45:26.534144       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1114 13:45:26.544907       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1114 13:45:26.549931       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1114 13:45:26.549968       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1114 13:45:27.000731       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 13:45:27.066052       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1114 13:45:27.219666       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1114 13:45:27.220971       1 controller.go:609] quota admission added evaluator for: endpoints
	I1114 13:45:27.225464       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 13:45:27.983012       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1114 13:45:28.909317       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1114 13:45:28.993132       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1114 13:45:32.318601       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 13:45:43.531593       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1114 13:45:43.892657       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1114 13:46:09.199907       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1114 13:46:33.691770       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [7b238ab85afa87e7860dda5a8f16ea6f09757cf086f4c97fbf6b8ed2b56817d0] <==
	* I1114 13:45:43.586970       1 shared_informer.go:230] Caches are synced for service account 
	I1114 13:45:43.885423       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1114 13:45:43.896398       1 shared_informer.go:230] Caches are synced for endpoint 
	I1114 13:45:43.922780       1 shared_informer.go:230] Caches are synced for stateful set 
	I1114 13:45:43.924232       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"7a277631-8625-4a4a-a25d-cbbff624542f", APIVersion:"apps/v1", ResourceVersion:"230", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-srljf
	I1114 13:45:43.924436       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"ba35531e-6f02-4ea5-80d3-c49500cb10ee", APIVersion:"apps/v1", ResourceVersion:"217", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-77ndl
	I1114 13:45:43.943400       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	E1114 13:45:43.955865       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"ba35531e-6f02-4ea5-80d3-c49500cb10ee", ResourceVersion:"217", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835566328, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d0ade0), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x4000d0ae00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000d0ae20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000d95a80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x4000d0ae40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d0ae60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000d0aea0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40009a2690), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000bbc0b8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004d16c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000ed558)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000bbc128)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1114 13:45:43.974927       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"7a277631-8625-4a4a-a25d-cbbff624542f", ResourceVersion:"230", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835566329, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000d0af00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000d0af20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000d0af40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d0af60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d0af80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000d0afa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000d0afc0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000d0b000)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40009a2960), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000bbc4d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004d1730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40000ed560)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000bbc530)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1114 13:45:43.983516       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 13:45:43.983547       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E1114 13:45:44.002377       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"7a277631-8625-4a4a-a25d-cbbff624542f", ResourceVersion:"358", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835566329, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001aba5c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001aba5e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001aba600), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001aba620)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001aba640), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"",
UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001aba660), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*
v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001aba680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStore
VolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.
CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001aba6a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*
v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001aba6c0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001aba700)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10
0m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1
.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001604780), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001ab6c98), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400026a4d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Tolera
tion{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e090)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001ab6ce0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please a
pply your changes to the latest version and try again
	I1114 13:45:44.038584       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 13:45:44.042045       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 13:45:44.042527       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 13:45:44.192187       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"18b2f164-0389-4726-9f23-cce06d661048", APIVersion:"apps/v1", ResourceVersion:"367", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1114 13:45:44.335282       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"3be718ff-f920-49f7-b0f9-dcfee059acab", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-dd76b
	I1114 13:46:09.172454       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"a12d6928-a4e0-4bd3-8634-7e99d7349af3", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1114 13:46:09.230217       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"2c58953c-8999-4ca4-9b8c-4bed5bb6ae8e", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-7mjcn
	I1114 13:46:09.234551       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1c7e8711-1be7-4e53-a7b2-7220a4f4a421", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-q8xtz
	I1114 13:46:09.281609       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e87b2626-a276-4f14-8de9-c0d2f82cac86", APIVersion:"batch/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-2nqkf
	I1114 13:46:11.538756       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"e87b2626-a276-4f14-8de9-c0d2f82cac86", APIVersion:"batch/v1", ResourceVersion:"500", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1114 13:46:11.570315       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"1c7e8711-1be7-4e53-a7b2-7220a4f4a421", APIVersion:"batch/v1", ResourceVersion:"492", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1114 13:46:42.556570       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"db010b62-58b0-441b-8a51-0a228cbd9592", APIVersion:"apps/v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1114 13:46:42.563383       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"3aa889c9-38d0-422c-a392-68377ed7c8bb", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-56p6w
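	
	The "Operation cannot be fulfilled on daemonsets.apps" errors above are optimistic-concurrency conflicts, not corruption: the daemon controller tried to write status against a stale resourceVersion (217, 230, and 358 in the dumps) after another writer had already updated the object, and it simply retries with the latest version. The version an update must match can be read directly:
	
	  kubectl -n kube-system get daemonset kindnet -o jsonpath='{.metadata.resourceVersion}{"\n"}'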
	
	* 
	* ==> kube-proxy [48eed6e2ee0bab584d1debb2ee3ceb107453fbe711718e89346cf08ab568555b] <==
	* W1114 13:45:44.757115       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1114 13:45:44.776243       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1114 13:45:44.776352       1 server_others.go:186] Using iptables Proxier.
	I1114 13:45:44.779020       1 server.go:583] Version: v1.18.20
	I1114 13:45:44.797911       1 config.go:315] Starting service config controller
	I1114 13:45:44.797959       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1114 13:45:44.798009       1 config.go:133] Starting endpoints config controller
	I1114 13:45:44.798013       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1114 13:45:44.898152       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1114 13:45:44.898289       1 shared_informer.go:230] Caches are synced for service config 
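	
	The "Unknown proxy mode" warning fires because the mode field in the mounted config.conf is empty, so kube-proxy falls back to iptables. The effective setting can be inspected in the kubeadm-managed ConfigMap (the key's dot is escaped for jsonpath):
	
	  kubectl -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep mode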
	
	* 
	* ==> kube-scheduler [c38faf3ac756047f1abe2e1e83974e52995bf04c9d29fab07317af2897c59b83] <==
	* W1114 13:45:25.700939       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 13:45:25.704092       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 13:45:25.732619       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1114 13:45:25.732843       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1114 13:45:25.736042       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1114 13:45:25.736128       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:45:25.736144       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:45:25.736162       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1114 13:45:25.739160       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:45:25.739504       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 13:45:25.739887       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:45:25.740134       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:45:25.743174       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 13:45:25.743448       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 13:45:25.745568       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:45:25.745924       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 13:45:25.746275       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 13:45:25.747232       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 13:45:25.747671       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 13:45:25.748082       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:45:26.579859       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:45:26.638727       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:45:26.849879       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1114 13:45:27.336368       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1114 13:45:43.575593       1 factory.go:503] pod: kube-system/coredns-66bff467f8-dd76b is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Nov 14 13:46:46 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:46:46.644998    1652 pod_workers.go:191] Error syncing pod 646d3a22-ad1b-4cfe-9199-7ee145fb1532 ("hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"
	Nov 14 13:46:47 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:47.648487    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d2b7314b5775044fff089fe42172ca4eef2d48b9adf9076cfb83fce50751a430
	Nov 14 13:46:47 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:46:47.648739    1652 pod_workers.go:191] Error syncing pod 646d3a22-ad1b-4cfe-9199-7ee145fb1532 ("hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"
	Nov 14 13:46:52 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:52.394718    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9a3c371603936715bef48ad763b956cb622eaf5ec22b5ba738facf2798a4701d
	Nov 14 13:46:52 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:46:52.395094    1652 pod_workers.go:191] Error syncing pod cdb71ba8-e37b-4367-b4be-86eba8f8de56 ("kube-ingress-dns-minikube_kube-system(cdb71ba8-e37b-4367-b4be-86eba8f8de56)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(cdb71ba8-e37b-4367-b4be-86eba8f8de56)"
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:58.395296    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d2b7314b5775044fff089fe42172ca4eef2d48b9adf9076cfb83fce50751a430
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:58.498238    1652 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-qfd5z" (UniqueName: "kubernetes.io/secret/cdb71ba8-e37b-4367-b4be-86eba8f8de56-minikube-ingress-dns-token-qfd5z") pod "cdb71ba8-e37b-4367-b4be-86eba8f8de56" (UID: "cdb71ba8-e37b-4367-b4be-86eba8f8de56")
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:58.505982    1652 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdb71ba8-e37b-4367-b4be-86eba8f8de56-minikube-ingress-dns-token-qfd5z" (OuterVolumeSpecName: "minikube-ingress-dns-token-qfd5z") pod "cdb71ba8-e37b-4367-b4be-86eba8f8de56" (UID: "cdb71ba8-e37b-4367-b4be-86eba8f8de56"). InnerVolumeSpecName "minikube-ingress-dns-token-qfd5z". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:58.598801    1652 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-qfd5z" (UniqueName: "kubernetes.io/secret/cdb71ba8-e37b-4367-b4be-86eba8f8de56-minikube-ingress-dns-token-qfd5z") on node "ingress-addon-legacy-011886" DevicePath ""
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:58.670028    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d2b7314b5775044fff089fe42172ca4eef2d48b9adf9076cfb83fce50751a430
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:46:58.670407    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c7e0e04c23d870da3354f6c93f8cbd9c0a5c16e0fde7697e1f573ac0fca96fd5
	Nov 14 13:46:58 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:46:58.670687    1652 pod_workers.go:191] Error syncing pod 646d3a22-ad1b-4cfe-9199-7ee145fb1532 ("hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"
	Nov 14 13:47:00 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:00.677400    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9a3c371603936715bef48ad763b956cb622eaf5ec22b5ba738facf2798a4701d
	Nov 14 13:47:03 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:47:03.369463    1652 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-7mjcn.1797815d99829728", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-7mjcn", UID:"42e67000-8ac5-4800-9478-2e844ae19564", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-011886"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14cfc75d5a3f128, ext:94531606921, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14cfc75d5a3f128, ext:94531606921, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-7mjcn.1797815d99829728" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 14 13:47:03 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:47:03.389048    1652 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-7mjcn.1797815d99829728", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-7mjcn", UID:"42e67000-8ac5-4800-9478-2e844ae19564", APIVersion:"v1", ResourceVersion:"484", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-011886"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc14cfc75d5a3f128, ext:94531606921, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc14cfc75d6dbcd79, ext:94552045018, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-7mjcn.1797815d99829728" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 14 13:47:05 ingress-addon-legacy-011886 kubelet[1652]: W1114 13:47:05.690322    1652 pod_container_deletor.go:77] Container "970744cd243d7124192e268b38e0592d3426c70b8f3581de2eeeb8635a248183" not found in pod's containers
	Nov 14 13:47:07 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:07.465798    1652 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-srdzl" (UniqueName: "kubernetes.io/secret/42e67000-8ac5-4800-9478-2e844ae19564-ingress-nginx-token-srdzl") pod "42e67000-8ac5-4800-9478-2e844ae19564" (UID: "42e67000-8ac5-4800-9478-2e844ae19564")
	Nov 14 13:47:07 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:07.465859    1652 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/42e67000-8ac5-4800-9478-2e844ae19564-webhook-cert") pod "42e67000-8ac5-4800-9478-2e844ae19564" (UID: "42e67000-8ac5-4800-9478-2e844ae19564")
	Nov 14 13:47:07 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:07.472441    1652 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e67000-8ac5-4800-9478-2e844ae19564-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "42e67000-8ac5-4800-9478-2e844ae19564" (UID: "42e67000-8ac5-4800-9478-2e844ae19564"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 13:47:07 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:07.473219    1652 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e67000-8ac5-4800-9478-2e844ae19564-ingress-nginx-token-srdzl" (OuterVolumeSpecName: "ingress-nginx-token-srdzl") pod "42e67000-8ac5-4800-9478-2e844ae19564" (UID: "42e67000-8ac5-4800-9478-2e844ae19564"). InnerVolumeSpecName "ingress-nginx-token-srdzl". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 13:47:07 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:07.566189    1652 reconciler.go:319] Volume detached for volume "ingress-nginx-token-srdzl" (UniqueName: "kubernetes.io/secret/42e67000-8ac5-4800-9478-2e844ae19564-ingress-nginx-token-srdzl") on node "ingress-addon-legacy-011886" DevicePath ""
	Nov 14 13:47:07 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:07.566246    1652 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/42e67000-8ac5-4800-9478-2e844ae19564-webhook-cert") on node "ingress-addon-legacy-011886" DevicePath ""
	Nov 14 13:47:08 ingress-addon-legacy-011886 kubelet[1652]: W1114 13:47:08.400994    1652 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/42e67000-8ac5-4800-9478-2e844ae19564/volumes" does not exist
	Nov 14 13:47:11 ingress-addon-legacy-011886 kubelet[1652]: I1114 13:47:11.394347    1652 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c7e0e04c23d870da3354f6c93f8cbd9c0a5c16e0fde7697e1f573ac0fca96fd5
	Nov 14 13:47:11 ingress-addon-legacy-011886 kubelet[1652]: E1114 13:47:11.394632    1652 pod_workers.go:191] Error syncing pod 646d3a22-ad1b-4cfe-9199-7ee145fb1532 ("hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-56p6w_default(646d3a22-ad1b-4cfe-9199-7ee145fb1532)"
	
	* 
	* ==> storage-provisioner [100e53905b92f6f0185b6e997323c9fa9221ebbdbbcdc0b6ee5636bf1e5a3869] <==
	* I1114 13:45:47.551088       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:45:47.562946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:45:47.563118       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:45:47.572071       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:45:47.572518       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-011886_4d5c8d68-816a-42b4-b29e-9a4dc3d2fad4!
	I1114 13:45:47.573651       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"60cdf435-7b00-464b-9892-43545a9f7d92", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-011886_4d5c8d68-816a-42b4-b29e-9a4dc3d2fad4 became leader
	I1114 13:45:47.673036       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-011886_4d5c8d68-816a-42b4-b29e-9a4dc3d2fad4!
	

                                                
                                                
-- /stdout --
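A note on the kube-scheduler log above: the burst of "forbidden" reflector errors at 13:45:25-13:45:26 is a startup race, not part of this failure. The scheduler comes up before its RBAC bindings exist, and the errors stop once the informer caches sync at 13:45:27. A hedged way to confirm the permission is in place after startup, assuming the kubeconfig context from this run still resolves:

    # Impersonate the scheduler's user and re-check one of the formerly forbidden verbs
    kubectl --context ingress-addon-legacy-011886 auth can-i list pods --all-namespaces --as=system:kube-scheduler
    # expected once RBAC has settled: yes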
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-011886 -n ingress-addon-legacy-011886
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-011886 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (54.96s)
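For anyone triaging a reproduction of this failure: the kubelet log above points at two crash-looping pods (hello-world-app-5f5d8b66bb-56p6w in default, kube-ingress-dns-minikube in kube-system), and the rejected "Killing" events are only a side effect of the ingress-nginx namespace already terminating. A sketch of the usual follow-up, assuming the cluster behind this profile is still running:

    # Mirror the post-mortem query for pods that are not Running
    kubectl --context ingress-addon-legacy-011886 get po -A --field-selector=status.phase!=Running
    # Pull the back-off reason and the last failed container's output
    kubectl --context ingress-addon-legacy-011886 -n default describe pod hello-world-app-5f5d8b66bb-56p6w
    kubectl --context ingress-addon-legacy-011886 -n default logs hello-world-app-5f5d8b66bb-56p6w --previous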

                                                
                                    

Test pass (272/308)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.42
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.44
10 TestDownloadOnly/v1.28.3/json-events 10.05
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 13.39
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.64
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
25 TestAddons/Setup 127.55
27 TestAddons/parallel/Registry 15.07
29 TestAddons/parallel/InspektorGadget 10.91
30 TestAddons/parallel/MetricsServer 5.87
33 TestAddons/parallel/CSI 69.82
34 TestAddons/parallel/Headlamp 11.61
35 TestAddons/parallel/CloudSpanner 5.72
36 TestAddons/parallel/LocalPath 52.54
37 TestAddons/parallel/NvidiaDevicePlugin 5.82
40 TestAddons/serial/GCPAuth/Namespaces 0.2
41 TestAddons/StoppedEnableDisable 12.48
42 TestCertOptions 35.85
43 TestCertExpiration 232.06
45 TestForceSystemdFlag 43.51
46 TestForceSystemdEnv 46.03
47 TestDockerEnvContainerd 49.96
52 TestErrorSpam/setup 33.16
53 TestErrorSpam/start 0.92
54 TestErrorSpam/status 1.22
55 TestErrorSpam/pause 2
56 TestErrorSpam/unpause 2.11
57 TestErrorSpam/stop 1.51
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 63.04
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 6.13
64 TestFunctional/serial/KubeContext 0.07
65 TestFunctional/serial/KubectlGetPods 0.11
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.1
69 TestFunctional/serial/CacheCmd/cache/add_local 1.54
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.48
74 TestFunctional/serial/CacheCmd/cache/delete 0.17
75 TestFunctional/serial/MinikubeKubectlCmd 0.17
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
77 TestFunctional/serial/ExtraConfig 43.89
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.88
81 TestFunctional/serial/InvalidService 4.6
83 TestFunctional/parallel/ConfigCmd 0.65
84 TestFunctional/parallel/DashboardCmd 8.99
85 TestFunctional/parallel/DryRun 0.53
86 TestFunctional/parallel/InternationalLanguage 0.25
87 TestFunctional/parallel/StatusCmd 1.66
91 TestFunctional/parallel/ServiceCmdConnect 9.69
92 TestFunctional/parallel/AddonsCmd 0.21
93 TestFunctional/parallel/PersistentVolumeClaim 24.65
95 TestFunctional/parallel/SSHCmd 0.84
96 TestFunctional/parallel/CpCmd 1.7
98 TestFunctional/parallel/FileSync 0.42
99 TestFunctional/parallel/CertSync 2.59
103 TestFunctional/parallel/NodeLabels 0.13
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.97
107 TestFunctional/parallel/License 0.35
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/parallel/ServiceCmd/DeployApp 6.26
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
121 TestFunctional/parallel/ProfileCmd/profile_list 0.44
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
123 TestFunctional/parallel/MountCmd/any-port 7.99
124 TestFunctional/parallel/ServiceCmd/List 0.66
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.61
127 TestFunctional/parallel/ServiceCmd/Format 0.51
128 TestFunctional/parallel/ServiceCmd/URL 0.49
129 TestFunctional/parallel/MountCmd/specific-port 1.75
130 TestFunctional/parallel/MountCmd/VerifyCleanup 2.89
131 TestFunctional/parallel/Version/short 0.11
132 TestFunctional/parallel/Version/components 0.88
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
137 TestFunctional/parallel/ImageCommands/ImageBuild 3.01
138 TestFunctional/parallel/ImageCommands/Setup 1.7
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.32
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.26
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.32
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.55
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 94.15
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 8.57
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.68
162 TestJSONOutput/start/Command 83.36
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.85
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.79
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.84
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.26
187 TestKicCustomNetwork/create_custom_network 54.6
188 TestKicCustomNetwork/use_default_bridge_network 37.16
189 TestKicExistingNetwork 35.45
190 TestKicCustomSubnet 38.23
191 TestKicStaticIP 37.78
192 TestMainNoArgs 0.1
193 TestMinikubeProfile 74.16
196 TestMountStart/serial/StartWithMountFirst 9.23
197 TestMountStart/serial/VerifyMountFirst 0.29
198 TestMountStart/serial/StartWithMountSecond 7.06
199 TestMountStart/serial/VerifyMountSecond 0.3
200 TestMountStart/serial/DeleteFirst 1.7
201 TestMountStart/serial/VerifyMountPostDelete 0.32
202 TestMountStart/serial/Stop 1.23
203 TestMountStart/serial/RestartStopped 7.41
204 TestMountStart/serial/VerifyMountPostStop 0.31
207 TestMultiNode/serial/FreshStart2Nodes 108.37
208 TestMultiNode/serial/DeployApp2Nodes 4.88
209 TestMultiNode/serial/PingHostFrom2Pods 1.29
210 TestMultiNode/serial/AddNode 19.85
211 TestMultiNode/serial/ProfileList 0.38
212 TestMultiNode/serial/CopyFile 11.88
213 TestMultiNode/serial/StopNode 2.51
214 TestMultiNode/serial/StartAfterStop 12.49
215 TestMultiNode/serial/RestartKeepsNodes 121.13
216 TestMultiNode/serial/DeleteNode 5.27
217 TestMultiNode/serial/StopMultiNode 24.35
218 TestMultiNode/serial/RestartMultiNode 88.37
219 TestMultiNode/serial/ValidateNameConflict 37.69
224 TestPreload 176.13
226 TestScheduledStopUnix 107.45
229 TestInsufficientStorage 10.97
230 TestRunningBinaryUpgrade 89.29
232 TestKubernetesUpgrade 386.35
233 TestMissingContainerUpgrade 148.99
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 41.09
237 TestNoKubernetes/serial/StartWithStopK8s 17.17
238 TestNoKubernetes/serial/Start 10.16
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.55
240 TestNoKubernetes/serial/ProfileList 6.62
241 TestNoKubernetes/serial/Stop 1.28
242 TestNoKubernetes/serial/StartNoArgs 7.24
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
244 TestStoppedBinaryUpgrade/Setup 1.46
245 TestStoppedBinaryUpgrade/Upgrade 116.71
246 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
255 TestPause/serial/Start 64.36
256 TestPause/serial/SecondStartNoReconfiguration 7.26
257 TestPause/serial/Pause 1.26
258 TestPause/serial/VerifyStatus 0.59
259 TestPause/serial/Unpause 1.08
260 TestPause/serial/PauseAgain 1.32
261 TestPause/serial/DeletePaused 3.1
262 TestPause/serial/VerifyDeletedResources 0.9
270 TestNetworkPlugins/group/false 6.4
275 TestStartStop/group/old-k8s-version/serial/FirstStart 127.37
276 TestStartStop/group/old-k8s-version/serial/DeployApp 8.54
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.13
278 TestStartStop/group/old-k8s-version/serial/Stop 12.5
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
280 TestStartStop/group/old-k8s-version/serial/SecondStart 667.21
282 TestStartStop/group/no-preload/serial/FirstStart 79.97
283 TestStartStop/group/no-preload/serial/DeployApp 8.49
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
285 TestStartStop/group/no-preload/serial/Stop 12.21
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
287 TestStartStop/group/no-preload/serial/SecondStart 337.29
288 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.03
289 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
290 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
291 TestStartStop/group/no-preload/serial/Pause 3.48
293 TestStartStop/group/embed-certs/serial/FirstStart 61.49
294 TestStartStop/group/embed-certs/serial/DeployApp 8.48
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.31
296 TestStartStop/group/embed-certs/serial/Stop 12.2
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
298 TestStartStop/group/embed-certs/serial/SecondStart 335.29
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
302 TestStartStop/group/old-k8s-version/serial/Pause 3.71
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 63.02
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.54
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.31
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.22
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.42
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 342.18
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
313 TestStartStop/group/embed-certs/serial/Pause 3.74
315 TestStartStop/group/newest-cni/serial/FirstStart 43.35
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.45
318 TestStartStop/group/newest-cni/serial/Stop 1.29
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
320 TestStartStop/group/newest-cni/serial/SecondStart 32.43
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.41
324 TestStartStop/group/newest-cni/serial/Pause 3.65
325 TestNetworkPlugins/group/auto/Start 60.81
326 TestNetworkPlugins/group/auto/KubeletFlags 0.38
327 TestNetworkPlugins/group/auto/NetCatPod 9.43
328 TestNetworkPlugins/group/auto/DNS 0.33
329 TestNetworkPlugins/group/auto/Localhost 0.26
330 TestNetworkPlugins/group/auto/HairPin 0.24
331 TestNetworkPlugins/group/kindnet/Start 94.26
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.04
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.18
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.55
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.41
336 TestNetworkPlugins/group/calico/Start 71.77
337 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
339 TestNetworkPlugins/group/kindnet/NetCatPod 10.49
340 TestNetworkPlugins/group/calico/ControllerPod 5.04
341 TestNetworkPlugins/group/kindnet/DNS 0.26
342 TestNetworkPlugins/group/kindnet/Localhost 0.17
343 TestNetworkPlugins/group/kindnet/HairPin 0.22
344 TestNetworkPlugins/group/calico/KubeletFlags 0.37
345 TestNetworkPlugins/group/calico/NetCatPod 10.42
346 TestNetworkPlugins/group/calico/DNS 0.3
347 TestNetworkPlugins/group/calico/Localhost 0.27
348 TestNetworkPlugins/group/calico/HairPin 0.29
349 TestNetworkPlugins/group/custom-flannel/Start 66.61
350 TestNetworkPlugins/group/enable-default-cni/Start 93.88
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.43
353 TestNetworkPlugins/group/custom-flannel/DNS 0.23
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
356 TestNetworkPlugins/group/flannel/Start 60.17
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.45
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.3
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
362 TestNetworkPlugins/group/bridge/Start 89.16
363 TestNetworkPlugins/group/flannel/ControllerPod 5.03
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
365 TestNetworkPlugins/group/flannel/NetCatPod 10.39
366 TestNetworkPlugins/group/flannel/DNS 0.32
367 TestNetworkPlugins/group/flannel/Localhost 0.26
368 TestNetworkPlugins/group/flannel/HairPin 0.28
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
370 TestNetworkPlugins/group/bridge/NetCatPod 8.34
371 TestNetworkPlugins/group/bridge/DNS 0.22
372 TestNetworkPlugins/group/bridge/Localhost 0.18
373 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.16.0/json-events (12.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-690510 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-690510 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.416163026s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.42s)
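The json-events assertion drives the start shown above and parses its line-delimited JSON event stream. The same stream can be eyeballed by hand; the jq filter below is a sketch that assumes jq is installed and that each line is a CloudEvents-style object with a type field, which is how minikube emits -o=json output:

    out/minikube-linux-arm64 start -o=json --download-only -p download-only-690510 \
      --force --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker \
      | jq -r '.type'   # e.g. io.k8s.sigs.minikube.step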

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
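preload-exists only checks the side effect of the previous download: that the preload tarball landed in minikube's cache. The equivalent manual check is a couple of shell lines; MINIKUBE_HOME below stands in for the .minikube directory this run used (/home/jenkins/minikube-integration/17581-1246551/.minikube), and the md5 is the checksum parameter visible in the download URL later in this report:

    # Confirm the tarball the downloader wrote is on disk
    ls -lh "${MINIKUBE_HOME}/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4"
    # Recompute the md5 the downloader verified against
    md5sum "${MINIKUBE_HOME}/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4"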

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-690510
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-690510: exit status 85 (442.403924ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-690510 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |          |
	|         | -p download-only-690510        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:34:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:34:18.388726 1251910 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:34:18.389394 1251910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:18.389409 1251910 out.go:309] Setting ErrFile to fd 2...
	I1114 13:34:18.389415 1251910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:18.389734 1251910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	W1114 13:34:18.389893 1251910 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17581-1246551/.minikube/config/config.json: open /home/jenkins/minikube-integration/17581-1246551/.minikube/config/config.json: no such file or directory
	I1114 13:34:18.390328 1251910 out.go:303] Setting JSON to true
	I1114 13:34:18.391215 1251910 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37005,"bootTime":1699931854,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 13:34:18.391307 1251910 start.go:138] virtualization:  
	I1114 13:34:18.394041 1251910 out.go:97] [download-only-690510] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	W1114 13:34:18.394301 1251910 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball: no such file or directory
	I1114 13:34:18.396376 1251910 out.go:169] MINIKUBE_LOCATION=17581
	I1114 13:34:18.394433 1251910 notify.go:220] Checking for updates...
	I1114 13:34:18.399666 1251910 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:34:18.401251 1251910 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:34:18.403002 1251910 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 13:34:18.405019 1251910 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1114 13:34:18.408888 1251910 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 13:34:18.409168 1251910 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:34:18.432826 1251910 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:34:18.432909 1251910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:18.525082 1251910 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-14 13:34:18.514856674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:18.525212 1251910 docker.go:295] overlay module found
	I1114 13:34:18.527126 1251910 out.go:97] Using the docker driver based on user configuration
	I1114 13:34:18.527155 1251910 start.go:298] selected driver: docker
	I1114 13:34:18.527166 1251910 start.go:902] validating driver "docker" against <nil>
	I1114 13:34:18.527277 1251910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:18.604069 1251910 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-14 13:34:18.593990437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:18.604222 1251910 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:34:18.604507 1251910 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1114 13:34:18.604660 1251910 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1114 13:34:18.607011 1251910 out.go:169] Using Docker driver with root privileges
	I1114 13:34:18.608848 1251910 cni.go:84] Creating CNI manager for ""
	I1114 13:34:18.608871 1251910 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:34:18.608888 1251910 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:34:18.608906 1251910 start_flags.go:323] config:
	{Name:download-only-690510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-690510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:18.610826 1251910 out.go:97] Starting control plane node download-only-690510 in cluster download-only-690510
	I1114 13:34:18.610845 1251910 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1114 13:34:18.613086 1251910 out.go:97] Pulling base image ...
	I1114 13:34:18.613112 1251910 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1114 13:34:18.613268 1251910 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:34:18.631550 1251910 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 13:34:18.631578 1251910 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:34:18.631769 1251910 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1114 13:34:18.631868 1251910 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:34:18.678326 1251910 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1114 13:34:18.678358 1251910 cache.go:56] Caching tarball of preloaded images
	I1114 13:34:18.679123 1251910 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1114 13:34:18.681737 1251910 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1114 13:34:18.681768 1251910 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4 ...
	I1114 13:34:18.797309 1251910 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:1f1e2324dbd6e4f3d8734226d9194e9f -> /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-arm64.tar.lz4
	I1114 13:34:23.097630 1251910 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-690510"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.44s)
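The exit status 85 above is the expected result, not a flake: --download-only never creates a node, so "minikube logs" has no control plane to read ("The control plane node \"\" does not exist.") and the test only asserts on that failure while capturing its output. Reproduced by hand, as a sketch against the same profile:

    out/minikube-linux-arm64 logs -p download-only-690510
    echo $?   # 85 on this run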

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (10.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-690510 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-690510 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.047622372s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (10.05s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-690510
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-690510: exit status 85 (86.957181ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-690510 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |          |
	|         | -p download-only-690510        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-690510 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |          |
	|         | -p download-only-690510        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:34:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:34:31.258016 1251988 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:34:31.258172 1251988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:31.258181 1251988 out.go:309] Setting ErrFile to fd 2...
	I1114 13:34:31.258188 1251988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:31.258461 1251988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	W1114 13:34:31.258640 1251988 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17581-1246551/.minikube/config/config.json: open /home/jenkins/minikube-integration/17581-1246551/.minikube/config/config.json: no such file or directory
	I1114 13:34:31.258888 1251988 out.go:303] Setting JSON to true
	I1114 13:34:31.259767 1251988 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37018,"bootTime":1699931854,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 13:34:31.259841 1251988 start.go:138] virtualization:  
	I1114 13:34:31.290952 1251988 out.go:97] [download-only-690510] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:34:31.291924 1251988 notify.go:220] Checking for updates...
	I1114 13:34:31.322788 1251988 out.go:169] MINIKUBE_LOCATION=17581
	I1114 13:34:31.355018 1251988 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:34:31.387066 1251988 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:34:31.419417 1251988 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 13:34:31.451599 1251988 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1114 13:34:31.515689 1251988 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 13:34:31.516354 1251988 config.go:182] Loaded profile config "download-only-690510": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1114 13:34:31.516463 1251988 start.go:810] api.Load failed for download-only-690510: filestore "download-only-690510": Docker machine "download-only-690510" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1114 13:34:31.516592 1251988 driver.go:378] Setting default libvirt URI to qemu:///system
	W1114 13:34:31.516625 1251988 start.go:810] api.Load failed for download-only-690510: filestore "download-only-690510": Docker machine "download-only-690510" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1114 13:34:31.541931 1251988 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:34:31.542020 1251988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:31.612844 1251988 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2023-11-14 13:34:31.602431585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:31.612960 1251988 docker.go:295] overlay module found
	I1114 13:34:31.643280 1251988 out.go:97] Using the docker driver based on existing profile
	I1114 13:34:31.643335 1251988 start.go:298] selected driver: docker
	I1114 13:34:31.643343 1251988 start.go:902] validating driver "docker" against &{Name:download-only-690510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-690510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:31.643547 1251988 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:31.711809 1251988 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2023-11-14 13:34:31.702361653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:31.712290 1251988 cni.go:84] Creating CNI manager for ""
	I1114 13:34:31.712310 1251988 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1114 13:34:31.712324 1251988 start_flags.go:323] config:
	{Name:download-only-690510 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-690510 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:31.755675 1251988 out.go:97] Starting control plane node download-only-690510 in cluster download-only-690510
	I1114 13:34:31.755717 1251988 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1114 13:34:31.785321 1251988 out.go:97] Pulling base image ...
	I1114 13:34:31.785357 1251988 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1114 13:34:31.785595 1251988 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:34:31.803670 1251988 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 13:34:31.803697 1251988 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:34:31.803833 1251988 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1114 13:34:31.803857 1251988 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory, skipping pull
	I1114 13:34:31.803862 1251988 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in cache, skipping pull
	I1114 13:34:31.803876 1251988 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	I1114 13:34:31.850029 1251988 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	I1114 13:34:31.850060 1251988 cache.go:56] Caching tarball of preloaded images
	I1114 13:34:31.850947 1251988 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime containerd
	I1114 13:34:31.883000 1251988 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1114 13:34:31.883067 1251988 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4 ...
	I1114 13:34:31.993175 1251988 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:bef3312f8cc1e9e2e6a78bd8b3d269c4 -> /home/jenkins/minikube-integration/17581-1246551/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-containerd-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-690510"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (13.39s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
aaa_download_only_test.go:190: (dbg) Done: out/minikube-linux-arm64 delete --all: (13.388845583s)
--- PASS: TestDownloadOnly/DeleteAll (13.39s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-690510
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-603085 --alsologtostderr --binary-mirror http://127.0.0.1:46489 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-603085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-603085
--- PASS: TestBinaryMirror (0.64s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-135796
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-135796: exit status 85 (97.588857ms)

-- stdout --
	* Profile "addons-135796" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-135796"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-135796
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-135796: exit status 85 (95.168932ms)

-- stdout --
	* Profile "addons-135796" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-135796"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (127.55s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-135796 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-135796 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m7.545584351s)
--- PASS: TestAddons/Setup (127.55s)

TestAddons/parallel/Registry (15.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 37.10348ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ntgrp" [a00a4394-1bb1-48dd-9264-07512de6c23b] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015239727s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rzmvh" [833f2957-ab82-4119-85d5-4aabb9f89026] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.018746036s
addons_test.go:339: (dbg) Run:  kubectl --context addons-135796 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-135796 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-135796 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.767826883s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.07s)

TestAddons/parallel/InspektorGadget (10.91s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lf5mx" [f1a1916c-9274-4a48-ac70-b9129857f0e5] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013159497s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-135796
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-135796: (5.89844903s)
--- PASS: TestAddons/parallel/InspektorGadget (10.91s)

TestAddons/parallel/MetricsServer (5.87s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.989814ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-vq7v7" [f0795cc1-d627-4e83-99c7-ffd6af24b9b9] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012513038s
addons_test.go:414: (dbg) Run:  kubectl --context addons-135796 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.87s)

TestAddons/parallel/CSI (69.82s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 38.437187ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-135796 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/11/14 13:37:18 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-135796 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d12bd0dc-7c87-44ae-9b31-48e0d8f84bbb] Pending
helpers_test.go:344: "task-pv-pod" [d12bd0dc-7c87-44ae-9b31-48e0d8f84bbb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d12bd0dc-7c87-44ae-9b31-48e0d8f84bbb] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.025447687s
addons_test.go:583: (dbg) Run:  kubectl --context addons-135796 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135796 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135796 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-135796 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-135796 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-135796 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-135796 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-135796 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b2935a97-34f2-468e-9115-f21e3dcc0560] Pending
helpers_test.go:344: "task-pv-pod-restore" [b2935a97-34f2-468e-9115-f21e3dcc0560] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b2935a97-34f2-468e-9115-f21e3dcc0560] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.022577374s
addons_test.go:625: (dbg) Run:  kubectl --context addons-135796 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-135796 delete pod task-pv-pod-restore: (1.098576783s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-135796 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-135796 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-135796 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.861039005s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.82s)

TestAddons/parallel/Headlamp (11.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-135796 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-135796 --alsologtostderr -v=1: (1.56472768s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-mk5hh" [70dc15c8-c03b-4e9c-a9bf-146d0b8f21a7] Pending
helpers_test.go:344: "headlamp-777fd4b855-mk5hh" [70dc15c8-c03b-4e9c-a9bf-146d0b8f21a7] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-mk5hh" [70dc15c8-c03b-4e9c-a9bf-146d0b8f21a7] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.041831596s
--- PASS: TestAddons/parallel/Headlamp (11.61s)

TestAddons/parallel/CloudSpanner (5.72s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-dmnwf" [f79d8f20-e9c3-46ec-95a6-638614395238] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.018397112s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-135796
--- PASS: TestAddons/parallel/CloudSpanner (5.72s)

TestAddons/parallel/LocalPath (52.54s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-135796 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-135796 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c5cb3fc2-bf32-45fb-b133-5199bded056c] Pending
helpers_test.go:344: "test-local-path" [c5cb3fc2-bf32-45fb-b133-5199bded056c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c5cb3fc2-bf32-45fb-b133-5199bded056c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c5cb3fc2-bf32-45fb-b133-5199bded056c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.022211997s
addons_test.go:890: (dbg) Run:  kubectl --context addons-135796 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 ssh "cat /opt/local-path-provisioner/pvc-70e8eceb-372f-4aa1-b268-d2d2a4471f68_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-135796 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-135796 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-135796 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-135796 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.619108415s)
--- PASS: TestAddons/parallel/LocalPath (52.54s)

TestAddons/parallel/NvidiaDevicePlugin (5.82s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-drddv" [3aacd730-af31-478b-8629-70f475d2e57a] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01954639s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-135796
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.82s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-135796 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-135796 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/StoppedEnableDisable (12.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-135796
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-135796: (12.146708901s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-135796
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-135796
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-135796
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

TestCertOptions (35.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-484409 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1114 14:15:07.043526 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-484409 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (32.961728603s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-484409 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-484409 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-484409 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-484409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-484409
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-484409: (2.106413178s)
--- PASS: TestCertOptions (35.85s)

TestCertExpiration (232.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-749476 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-749476 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.113133586s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-749476 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-749476 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.937667214s)
helpers_test.go:175: Cleaning up "cert-expiration-749476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-749476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-749476: (3.008830028s)
--- PASS: TestCertExpiration (232.06s)

TestForceSystemdFlag (43.51s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-713691 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-713691 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.699044976s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-713691 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-713691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-713691
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-713691: (2.349951548s)
--- PASS: TestForceSystemdFlag (43.51s)

TestForceSystemdEnv (46.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-265934 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-265934 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (43.300669316s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-265934 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-265934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-265934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-265934: (2.340044159s)
--- PASS: TestForceSystemdEnv (46.03s)

TestDockerEnvContainerd (49.96s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-625884 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-625884 --driver=docker  --container-runtime=containerd: (33.25962153s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-625884"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-625884": (1.482665459s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lKKbGxlT8Qx4/agent.1270122" SSH_AGENT_PID="1270123" DOCKER_HOST=ssh://docker@127.0.0.1:34337 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lKKbGxlT8Qx4/agent.1270122" SSH_AGENT_PID="1270123" DOCKER_HOST=ssh://docker@127.0.0.1:34337 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lKKbGxlT8Qx4/agent.1270122" SSH_AGENT_PID="1270123" DOCKER_HOST=ssh://docker@127.0.0.1:34337 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.653372107s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lKKbGxlT8Qx4/agent.1270122" SSH_AGENT_PID="1270123" DOCKER_HOST=ssh://docker@127.0.0.1:34337 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-625884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-625884
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-625884: (2.070598158s)
--- PASS: TestDockerEnvContainerd (49.96s)

TestErrorSpam/setup (33.16s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-054055 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-054055 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-054055 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-054055 --driver=docker  --container-runtime=containerd: (33.160346256s)
--- PASS: TestErrorSpam/setup (33.16s)

TestErrorSpam/start (0.92s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 start --dry-run
--- PASS: TestErrorSpam/start (0.92s)

TestErrorSpam/status (1.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 status
--- PASS: TestErrorSpam/status (1.22s)

TestErrorSpam/pause (2s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 pause
--- PASS: TestErrorSpam/pause (2.00s)

TestErrorSpam/unpause (2.11s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 unpause
--- PASS: TestErrorSpam/unpause (2.11s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 stop: (1.265125854s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-054055 --log_dir /tmp/nospam-054055 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17581-1246551/.minikube/files/etc/test/nested/copy/1251905/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (63.04s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-927562 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1114 13:42:03.998069 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.004656 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.014867 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.035136 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.075423 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.155794 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.316184 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:04.636805 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:05.277036 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:06.557277 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:09.117656 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:42:14.238572 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-927562 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m3.035320785s)
--- PASS: TestFunctional/serial/StartWithProxy (63.04s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.13s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-927562 --alsologtostderr -v=8
E1114 13:42:24.478991 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-927562 --alsologtostderr -v=8: (6.132298174s)
functional_test.go:659: soft start took 6.132803722s for "functional-927562" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.13s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-927562 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:3.1: (1.368128457s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:3.3: (1.475525194s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:latest: (1.260325859s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.10s)
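The remote-cache flow exercised above amounts to the following commands (a sketch; the crictl check is how the later verify_cache_inside_node step confirms the result):

  $ out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:3.1      # pull to the host-side cache, then load into the node
  $ out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:3.3
  $ out/minikube-linux-arm64 -p functional-927562 cache add registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl images                   # the pause images should now be listed inside the node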

TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-927562 /tmp/TestFunctionalserialCacheCmdcacheadd_local2257605493/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cache add minikube-local-cache-test:functional-927562
functional_test.go:1085: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 cache add minikube-local-cache-test:functional-927562: (1.039469198s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cache delete minikube-local-cache-test:functional-927562
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-927562
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.54s)
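The local-image variant builds a throwaway image on the host, caches it into the node, then cleans up; a sketch (the test builds from a temp dir, but any build context with a Dockerfile would do):

  $ docker build -t minikube-local-cache-test:functional-927562 .
  $ out/minikube-linux-arm64 -p functional-927562 cache add minikube-local-cache-test:functional-927562
  $ out/minikube-linux-arm64 -p functional-927562 cache delete minikube-local-cache-test:functional-927562
  $ docker rmi minikube-local-cache-test:functional-927562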

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (359.347358ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 cache reload: (1.385413052s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.48s)
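The reload sequence in brief: delete the image inside the node, confirm it is gone, then let cache reload restore it from the host-side cache. A sketch of the commands logged above:

  $ out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl rmi registry.k8s.io/pause:latest
  $ out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: no such image present
  $ out/minikube-linux-arm64 -p functional-927562 cache reload
  $ out/minikube-linux-arm64 -p functional-927562 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again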

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 kubectl -- --context functional-927562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.17s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-927562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (43.89s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-927562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1114 13:42:44.959231 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-927562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.888445158s)
functional_test.go:757: restart took 43.888566282s for "functional-927562" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.89s)
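ExtraConfig restarts the existing cluster with an extra component flag; --wait=all blocks until every verified component reports healthy, hence the ~44s. A sketch of the invocation from this run:

  $ out/minikube-linux-arm64 start -p functional-927562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all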

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-927562 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 logs: (1.884563392s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

TestFunctional/serial/InvalidService (4.6s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-927562 apply -f testdata/invalidsvc.yaml
E1114 13:43:25.920271 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-927562
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-927562: exit status 115 (645.447025ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31542 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-927562 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.60s)
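The non-zero exit is the expected outcome here: minikube service still prints the NodePort URL, but exits 115 (SVC_UNREACHABLE) because no running pod backs the service. A sketch:

  $ kubectl --context functional-927562 apply -f testdata/invalidsvc.yaml
  $ out/minikube-linux-arm64 service invalid-svc -p functional-927562    # exit 115: no running pod for service invalid-svc
  $ kubectl --context functional-927562 delete -f testdata/invalidsvc.yaml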

TestFunctional/parallel/ConfigCmd (0.65s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 config get cpus: exit status 14 (111.022471ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 config get cpus: exit status 14 (99.365152ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.65s)
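config get on an unset key exits 14, which is what the test asserts before and after the set/unset round trip. A sketch:

  $ out/minikube-linux-arm64 -p functional-927562 config get cpus     # exit 14: key not in config
  $ out/minikube-linux-arm64 -p functional-927562 config set cpus 2
  $ out/minikube-linux-arm64 -p functional-927562 config get cpus     # now returns the stored value
  $ out/minikube-linux-arm64 -p functional-927562 config unset cpus   # the next get exits 14 again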

TestFunctional/parallel/DashboardCmd (8.99s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-927562 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-927562 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1283609: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.99s)

TestFunctional/parallel/DryRun (0.53s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-927562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-927562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (237.880135ms)

-- stdout --
	* [functional-927562] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1114 13:44:01.431634 1283172 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:44:01.431786 1283172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:01.431796 1283172 out.go:309] Setting ErrFile to fd 2...
	I1114 13:44:01.431802 1283172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:01.432082 1283172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:44:01.432519 1283172 out.go:303] Setting JSON to false
	I1114 13:44:01.433785 1283172 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37588,"bootTime":1699931854,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 13:44:01.433879 1283172 start.go:138] virtualization:  
	I1114 13:44:01.436283 1283172 out.go:177] * [functional-927562] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:44:01.442725 1283172 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:44:01.448930 1283172 notify.go:220] Checking for updates...
	I1114 13:44:01.452195 1283172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:44:01.454266 1283172 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:44:01.456155 1283172 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 13:44:01.458044 1283172 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:44:01.459805 1283172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:44:01.461958 1283172 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:44:01.462622 1283172 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:44:01.488961 1283172 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:44:01.489064 1283172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:44:01.575322 1283172 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2023-11-14 13:44:01.564159408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:44:01.575451 1283172 docker.go:295] overlay module found
	I1114 13:44:01.577459 1283172 out.go:177] * Using the docker driver based on existing profile
	I1114 13:44:01.579338 1283172 start.go:298] selected driver: docker
	I1114 13:44:01.579363 1283172 start.go:902] validating driver "docker" against &{Name:functional-927562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-927562 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:44:01.579558 1283172 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:44:01.581996 1283172 out.go:177] 
	W1114 13:44:01.583908 1283172 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1114 13:44:01.585637 1283172 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-927562 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.53s)
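--dry-run validates the configuration without starting anything; the 250MB request trips the memory validator (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB), while the second invocation with the profile's existing memory passes. A sketch:

  $ out/minikube-linux-arm64 start -p functional-927562 --dry-run --memory 250MB --driver=docker --container-runtime=containerd            # exit 23
  $ out/minikube-linux-arm64 start -p functional-927562 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd   # passes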

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-927562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-927562 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (246.391694ms)

-- stdout --
	* [functional-927562] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1114 13:44:01.181056 1283132 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:44:01.181290 1283132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:01.181301 1283132 out.go:309] Setting ErrFile to fd 2...
	I1114 13:44:01.181308 1283132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:44:01.181744 1283132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:44:01.182298 1283132 out.go:303] Setting JSON to false
	I1114 13:44:01.183695 1283132 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":37588,"bootTime":1699931854,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 13:44:01.183795 1283132 start.go:138] virtualization:  
	I1114 13:44:01.188043 1283132 out.go:177] * [functional-927562] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1114 13:44:01.189927 1283132 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:44:01.191610 1283132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:44:01.190070 1283132 notify.go:220] Checking for updates...
	I1114 13:44:01.195848 1283132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 13:44:01.197672 1283132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 13:44:01.199699 1283132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:44:01.201454 1283132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:44:01.204389 1283132 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:44:01.205423 1283132 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:44:01.233171 1283132 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:44:01.233291 1283132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:44:01.334312 1283132 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:56 SystemTime:2023-11-14 13:44:01.320150721 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:44:01.334448 1283132 docker.go:295] overlay module found
	I1114 13:44:01.336357 1283132 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1114 13:44:01.338379 1283132 start.go:298] selected driver: docker
	I1114 13:44:01.338405 1283132 start.go:902] validating driver "docker" against &{Name:functional-927562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-927562 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:44:01.338574 1283132 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:44:01.341279 1283132 out.go:177] 
	W1114 13:44:01.343370 1283132 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1114 13:44:01.345385 1283132 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.66s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.66s)

TestFunctional/parallel/ServiceCmdConnect (9.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-927562 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-927562 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-q8d7l" [5995ad87-96d7-4f77-9224-173bad00f477] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-q8d7l" [5995ad87-96d7-4f77-9224-173bad00f477] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.018502439s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30513
functional_test.go:1674: http://192.168.49.2:30513: success! body:

Hostname: hello-node-connect-7799dfb7c6-q8d7l

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30513
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.69s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (24.65s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c32ce704-194f-45d4-8415-7661a2c0817e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.032056826s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-927562 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-927562 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-927562 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-927562 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fb5d5673-5bd0-4843-a22f-9044f4177dd7] Pending
helpers_test.go:344: "sp-pod" [fb5d5673-5bd0-4843-a22f-9044f4177dd7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fb5d5673-5bd0-4843-a22f-9044f4177dd7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.017355546s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-927562 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-927562 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-927562 delete -f testdata/storage-provisioner/pod.yaml: (1.335235159s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-927562 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4589a4de-e506-4acb-b301-0276fdd9e8cc] Pending
helpers_test.go:344: "sp-pod" [4589a4de-e506-4acb-b301-0276fdd9e8cc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4589a4de-e506-4acb-b301-0276fdd9e8cc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.023891426s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-927562 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.65s)
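The persistence check: write a file through the first pod, delete the pod, then read the file back from a fresh pod bound to the same claim. A sketch of the kubectl sequence logged above:

  $ kubectl --context functional-927562 apply -f testdata/storage-provisioner/pvc.yaml
  $ kubectl --context functional-927562 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-927562 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-927562 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-927562 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-927562 exec sp-pod -- ls /tmp/mount   # foo survives the pod replacement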

TestFunctional/parallel/SSHCmd (0.84s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.84s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh -n functional-927562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 cp functional-927562:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2398866032/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh -n functional-927562 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/FileSync (0.42s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1251905/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/test/nested/copy/1251905/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.42s)

TestFunctional/parallel/CertSync (2.59s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1251905.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/ssl/certs/1251905.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1251905.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /usr/share/ca-certificates/1251905.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/12519052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/ssl/certs/12519052.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/12519052.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /usr/share/ca-certificates/12519052.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.59s)
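Host certificates are synced into the node at both certificate locations, plus a hash-named entry (51391683.0 / 3ec20f2e.0 here, which appear to be OpenSSL subject-hash names); each is verified by cat-ing it over ssh. A sketch:

  $ out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/ssl/certs/1251905.pem"
  $ out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /usr/share/ca-certificates/1251905.pem"
  $ out/minikube-linux-arm64 -p functional-927562 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy of the same cert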

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-927562 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh "sudo systemctl is-active docker": exit status 1 (461.927289ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh "sudo systemctl is-active crio": exit status 1 (507.610541ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)
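On a containerd node the other runtimes must be disabled; systemctl is-active prints "inactive" and exits 3, and the test treats that non-zero exit as the expected result. A sketch:

  $ out/minikube-linux-arm64 -p functional-927562 ssh "sudo systemctl is-active docker"   # inactive, exit 3
  $ out/minikube-linux-arm64 -p functional-927562 ssh "sudo systemctl is-active crio"     # inactive, exit 3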

TestFunctional/parallel/License (0.35s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1280870: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-927562 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [98bc3a15-d787-4134-9461-22cfb604af31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [98bc3a15-d787-4134-9461-22cfb604af31] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.021507115s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-927562 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.68.148 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
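The tunnel workflow in brief (a sketch; the IP is the one assigned in this run, and the final curl is an assumption, any HTTP client works once the tunnel is up):

  $ out/minikube-linux-arm64 -p functional-927562 tunnel --alsologtostderr &   # gives LoadBalancer services a reachable ingress IP
  $ kubectl --context functional-927562 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'   # 10.98.68.148 here
  $ curl http://10.98.68.148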

TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-927562 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-927562 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-2zmvz" [e1d11e7d-539e-4f9e-9ea7-951460d2d9ce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-2zmvz" [e1d11e7d-539e-4f9e-9ea7-951460d2d9ce] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.0171899s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)
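Deploying the echo server used by the service tests (a sketch; image and names as in this run):

  $ kubectl --context functional-927562 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  $ kubectl --context functional-927562 expose deployment hello-node --type=NodePort --port=8080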

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "362.28168ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "79.762208ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "364.393508ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "76.195927ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.99s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdany-port2378413860/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699969436210251550" to /tmp/TestFunctionalparallelMountCmdany-port2378413860/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699969436210251550" to /tmp/TestFunctionalparallelMountCmdany-port2378413860/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699969436210251550" to /tmp/TestFunctionalparallelMountCmdany-port2378413860/001/test-1699969436210251550
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (406.014695ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 14 13:43 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 14 13:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 14 13:43 test-1699969436210251550
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh cat /mount-9p/test-1699969436210251550
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-927562 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7bc27c23-bd3a-47d3-bb9e-f0bcab765b6a] Pending
helpers_test.go:344: "busybox-mount" [7bc27c23-bd3a-47d3-bb9e-f0bcab765b6a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7bc27c23-bd3a-47d3-bb9e-f0bcab765b6a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7bc27c23-bd3a-47d3-bb9e-f0bcab765b6a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.022147371s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-927562 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdany-port2378413860/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.99s)

TestFunctional/parallel/ServiceCmd/List (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 service list -o json
functional_test.go:1493: Took "684.402061ms" to run "out/minikube-linux-arm64 -p functional-927562 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30689
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.61s)

TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30689
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdspecific-port3348034719/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdspecific-port3348034719/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh "sudo umount -f /mount-9p": exit status 1 (472.042619ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-927562 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdspecific-port3348034719/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.89s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1180448060/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1180448060/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1180448060/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T" /mount1: exit status 1 (1.307179255s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-927562 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1180448060/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1180448060/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-927562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1180448060/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.89s)

TestFunctional/parallel/Version/short (0.11s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

TestFunctional/parallel/Version/components (0.88s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-927562 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-927562
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-927562 image ls --format short --alsologtostderr:
I1114 13:44:27.944488 1285753 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:27.944723 1285753 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:27.944747 1285753 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:27.944770 1285753 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:27.945280 1285753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
I1114 13:44:27.946890 1285753 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:27.947512 1285753 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:27.948440 1285753 cli_runner.go:164] Run: docker container inspect functional-927562 --format={{.State.Status}}
I1114 13:44:27.981915 1285753 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:27.981981 1285753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927562
I1114 13:44:28.005605 1285753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/functional-927562/id_rsa Username:docker}
I1114 13:44:28.107339 1285753 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-927562 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.28.3            | sha256:537e9a | 31.6MB |
| docker.io/library/nginx                     | latest             | sha256:81be38 | 67.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:04b4ea | 25.3MB |
| docker.io/library/nginx                     | alpine             | sha256:aae348 | 19.6MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:9cdd64 | 86.5MB |
| registry.k8s.io/kube-scheduler              | v1.28.3            | sha256:42a4e7 | 17.1MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-927562  | sha256:7b4f55 | 1.01kB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3            | sha256:827643 | 30.3MB |
| registry.k8s.io/kube-proxy                  | v1.28.3            | sha256:a5dd5c | 22MB   |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:97e046 | 14.6MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-927562 image ls --format table --alsologtostderr:
I1114 13:44:28.666685 1285885 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:28.666882 1285885 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:28.666895 1285885 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:28.666902 1285885 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:28.667282 1285885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
I1114 13:44:28.668027 1285885 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:28.668167 1285885 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:28.668900 1285885 cli_runner.go:164] Run: docker container inspect functional-927562 --format={{.State.Status}}
I1114 13:44:28.688676 1285885 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:28.688737 1285885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927562
I1114 13:44:28.713658 1285885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/functional-927562/id_rsa Username:docker}
I1114 13:44:28.815964 1285885 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-927562 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19561536"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"14557471"},{"id":"sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"86464836"},{"id":"sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"21981421"},{"id":"sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"17063462"},{"id":"sha256:7b4f55c0d5a5190fee6e0f8a983e614d31752da101d897fd820b13da74393ee2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-927562"],"size":"1007"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"30344361"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"31557550"},{"id":"sha256:81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b","repoDigests":["docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6"],"repoTags":["docker.io/library/nginx:latest"],"size":"67241456"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"25324029"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-927562 image ls --format json --alsologtostderr:
I1114 13:44:28.321806 1285813 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:28.321997 1285813 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:28.322010 1285813 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:28.322016 1285813 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:28.322332 1285813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
I1114 13:44:28.323118 1285813 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:28.323397 1285813 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:28.324097 1285813 cli_runner.go:164] Run: docker container inspect functional-927562 --format={{.State.Status}}
I1114 13:44:28.364218 1285813 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:28.364279 1285813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927562
I1114 13:44:28.400539 1285813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/functional-927562/id_rsa Username:docker}
I1114 13:44:28.516988 1285813 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-927562 image ls --format yaml --alsologtostderr:
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "25324029"
- id: sha256:9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "86464836"
- id: sha256:537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "31557550"
- id: sha256:8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "30344361"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:7b4f55c0d5a5190fee6e0f8a983e614d31752da101d897fd820b13da74393ee2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-927562
size: "1007"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "17063462"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "19561536"
- id: sha256:81be38025439476d1b7303cb575df80e419fd1b3be4a639f3b3e51cf95720c7b
repoDigests:
- docker.io/library/nginx@sha256:86e53c4c16a6a276b204b0fd3a8143d86547c967dc8258b3d47c3a21bb68d3c6
repoTags:
- docker.io/library/nginx:latest
size: "67241456"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "14557471"
- id: sha256:a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "21981421"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-927562 image ls --format yaml --alsologtostderr:
I1114 13:44:27.939794 1285754 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:27.939982 1285754 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:27.939994 1285754 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:27.940001 1285754 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:27.940336 1285754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
I1114 13:44:27.941072 1285754 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:27.941256 1285754 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:27.941866 1285754 cli_runner.go:164] Run: docker container inspect functional-927562 --format={{.State.Status}}
I1114 13:44:27.984040 1285754 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:27.984096 1285754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927562
I1114 13:44:28.035603 1285754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/functional-927562/id_rsa Username:docker}
I1114 13:44:28.142680 1285754 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-927562 ssh pgrep buildkitd: exit status 1 (414.766261ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image build -t localhost/my-image:functional-927562 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-927562 image build -t localhost/my-image:functional-927562 testdata/build --alsologtostderr: (2.319007161s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-927562 image build -t localhost/my-image:functional-927562 testdata/build --alsologtostderr:
I1114 13:44:28.689444 1285889 out.go:296] Setting OutFile to fd 1 ...
I1114 13:44:28.692301 1285889 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:28.692370 1285889 out.go:309] Setting ErrFile to fd 2...
I1114 13:44:28.692391 1285889 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:44:28.693272 1285889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
I1114 13:44:28.694560 1285889 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:28.696720 1285889 config.go:182] Loaded profile config "functional-927562": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
I1114 13:44:28.697593 1285889 cli_runner.go:164] Run: docker container inspect functional-927562 --format={{.State.Status}}
I1114 13:44:28.729034 1285889 ssh_runner.go:195] Run: systemctl --version
I1114 13:44:28.729087 1285889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-927562
I1114 13:44:28.749448 1285889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34347 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/functional-927562/id_rsa Username:docker}
I1114 13:44:28.853264 1285889 build_images.go:151] Building image from path: /tmp/build.900606848.tar
I1114 13:44:28.853350 1285889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1114 13:44:28.867718 1285889 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.900606848.tar
I1114 13:44:28.874283 1285889 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.900606848.tar: stat -c "%s %y" /var/lib/minikube/build/build.900606848.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.900606848.tar': No such file or directory
I1114 13:44:28.874308 1285889 ssh_runner.go:362] scp /tmp/build.900606848.tar --> /var/lib/minikube/build/build.900606848.tar (3072 bytes)
I1114 13:44:28.911370 1285889 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.900606848
I1114 13:44:28.923949 1285889 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.900606848 -xf /var/lib/minikube/build/build.900606848.tar
I1114 13:44:28.936354 1285889 containerd.go:378] Building image: /var/lib/minikube/build/build.900606848
I1114 13:44:28.936453 1285889 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.900606848 --local dockerfile=/var/lib/minikube/build/build.900606848 --output type=image,name=localhost/my-image:functional-927562
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c0cd3e81b38a6d66e0fe5df1d8df23e1bf913ac7b5d86e4481cb204b74140b84 0.0s done
#8 exporting config sha256:2473f07e553bee43f5bff4d91557f58338abc3fc3c18e3b3935d84fdb1b153ad 0.0s done
#8 naming to localhost/my-image:functional-927562 done
#8 DONE 0.1s
I1114 13:44:30.875542 1285889 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.900606848 --local dockerfile=/var/lib/minikube/build/build.900606848 --output type=image,name=localhost/my-image:functional-927562: (1.939052814s)
I1114 13:44:30.875615 1285889 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.900606848
I1114 13:44:30.887275 1285889 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.900606848.tar
I1114 13:44:30.898867 1285889 build_images.go:207] Built localhost/my-image:functional-927562 from /tmp/build.900606848.tar
I1114 13:44:30.898902 1285889 build_images.go:123] succeeded building to: functional-927562
I1114 13:44:30.898908 1285889 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.01s)

TestFunctional/parallel/ImageCommands/Setup (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
2023/11/14 13:44:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.668705068s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-927562
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image rm gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.55s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-927562
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-927562 image save --daemon gcr.io/google-containers/addon-resizer:functional-927562 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-927562
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-927562
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-927562
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-927562
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (94.15s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-011886 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1114 13:44:47.841175 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-011886 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m34.144954486s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (94.15s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons enable ingress --alsologtostderr -v=5: (8.568190169s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (8.57s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-011886 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.68s)

TestJSONOutput/start/Command (83.36s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-353123 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1114 13:47:31.681384 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:48:30.198003 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.203410 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.213852 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.234334 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.274643 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.354990 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.515425 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:30.835911 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:31.476828 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:32.757412 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:48:35.317642 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-353123 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m23.354877226s)
--- PASS: TestJSONOutput/start/Command (83.36s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.85s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-353123 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.79s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-353123 --output=json --user=testUser
E1114 13:48:40.438491 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
--- PASS: TestJSONOutput/unpause/Command (0.79s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-353123 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-353123 --output=json --user=testUser: (5.839929353s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-702447 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-702447 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.924172ms)

-- stdout --
	{"specversion":"1.0","id":"9cda3508-76dc-47de-8506-a3019697f7dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-702447] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1ad8e6d-ddf0-4a74-8cb4-ef9a8787e54e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17581"}}
	{"specversion":"1.0","id":"3d08e4d6-411c-4175-87d0-d1d4c2143399","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf2f4405-0854-4d20-94bc-fdcaa92e4a5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig"}}
	{"specversion":"1.0","id":"1f3dffb2-b291-4b43-8735-544e84551c74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube"}}
	{"specversion":"1.0","id":"591197fa-8b4c-4e37-af47-a6d019099ef5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4ad4a91e-1407-4a1b-8873-eb2ffb71dcea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f57ec29-f2bc-40b3-9137-a0bf201d16b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-702447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-702447
--- PASS: TestErrorJSONOutput (0.26s)
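
The stdout above shows what `--output=json` actually emits: one CloudEvents-style JSON object per line, with `io.k8s.sigs.minikube.step`, `.info`, and `.error` event types. As a rough illustration only (the struct and program below are not part of minikube or this test suite; field names are taken from the log), such a stream can be consumed in Go like this:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// minikubeEvent mirrors the fields visible in the stdout above; the struct
// name and this program are illustrative sketches, not minikube's own types.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data        map[string]string `json:"data"`
}

func main() {
	// Pipe the output of `minikube start --output=json` into stdin.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}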

TestKicCustomNetwork/create_custom_network (54.6s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-385617 --network=
E1114 13:49:11.160003 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-385617 --network=: (52.460205799s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-385617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-385617
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-385617: (2.118317863s)
--- PASS: TestKicCustomNetwork/create_custom_network (54.60s)

TestKicCustomNetwork/use_default_bridge_network (37.16s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-593098 --network=bridge
E1114 13:49:52.120450 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-593098 --network=bridge: (35.037355842s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-593098" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-593098
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-593098: (2.093613699s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (37.16s)

TestKicExistingNetwork (35.45s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-911332 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-911332 --network=existing-network: (33.206066526s)
helpers_test.go:175: Cleaning up "existing-network-911332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-911332
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-911332: (2.071743839s)
--- PASS: TestKicExistingNetwork (35.45s)

TestKicCustomSubnet (38.23s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-243897 --subnet=192.168.60.0/24
E1114 13:51:14.046678 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 13:51:17.545666 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:17.550945 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:17.561227 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:17.581515 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:17.621810 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:17.702089 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:17.862422 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:18.182977 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:18.823993 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:20.104231 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:22.664902 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:27.785095 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-243897 --subnet=192.168.60.0/24: (36.040493415s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-243897 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-243897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-243897
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-243897: (2.158721523s)
--- PASS: TestKicCustomSubnet (38.23s)
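
The subnet assertion here boils down to the `docker network inspect` Go template shown in the log. A minimal standalone sketch of the same check (the helper name is hypothetical; it assumes only the docker CLI on PATH and the profile name from the run above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkSubnet shells out to the docker CLI the same way the test log shows,
// pulling the first IPAM config's subnet for the named network.
func networkSubnet(name string) (string, error) {
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	got, err := networkSubnet("custom-subnet-243897") // profile from the run above
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("subnet matches requested 192.168.60.0/24:", got == "192.168.60.0/24")
}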

TestKicStaticIP (37.78s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-629180 --static-ip=192.168.200.200
E1114 13:51:38.026021 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:51:58.506632 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:52:03.998109 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-629180 --static-ip=192.168.200.200: (35.397338442s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-629180 ip
helpers_test.go:175: Cleaning up "static-ip-629180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-629180
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-629180: (2.181483336s)
--- PASS: TestKicStaticIP (37.78s)

TestMainNoArgs (0.1s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.10s)

TestMinikubeProfile (74.16s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-581684 --driver=docker  --container-runtime=containerd
E1114 13:52:39.466898 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-581684 --driver=docker  --container-runtime=containerd: (35.050401213s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-588896 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-588896 --driver=docker  --container-runtime=containerd: (33.681946168s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-581684
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-588896
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-588896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-588896
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-588896: (2.031405976s)
helpers_test.go:175: Cleaning up "first-581684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-581684
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-581684: (2.020935217s)
--- PASS: TestMinikubeProfile (74.16s)

TestMountStart/serial/StartWithMountFirst (9.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-075491 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1114 13:53:30.197191 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-075491 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.227467924s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.23s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-075491 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (7.06s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-077271 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-077271 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.061168964s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.06s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-077271 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-075491 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-075491 --alsologtostderr -v=5: (1.701901865s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-077271 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-077271
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-077271: (1.232289003s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.41s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-077271
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-077271: (6.405929955s)
--- PASS: TestMountStart/serial/RestartStopped (7.41s)

TestMountStart/serial/VerifyMountPostStop (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-077271 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

TestMultiNode/serial/FreshStart2Nodes (108.37s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852984 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1114 13:54:01.387367 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852984 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.782003687s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.37s)

TestMultiNode/serial/DeployApp2Nodes (4.88s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-852984 -- rollout status deployment/busybox: (2.627775378s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-csqgj -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-ftwlv -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-csqgj -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-ftwlv -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-csqgj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-ftwlv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.88s)

TestMultiNode/serial/PingHostFrom2Pods (1.29s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-csqgj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-csqgj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-ftwlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-852984 -- exec busybox-5bc68d56bd-ftwlv -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.29s)

TestMultiNode/serial/AddNode (19.85s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-852984 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-852984 -v 3 --alsologtostderr: (19.08980944s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.85s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (11.88s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp testdata/cp-test.txt multinode-852984:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile99209166/001/cp-test_multinode-852984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984:/home/docker/cp-test.txt multinode-852984-m02:/home/docker/cp-test_multinode-852984_multinode-852984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m02 "sudo cat /home/docker/cp-test_multinode-852984_multinode-852984-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984:/home/docker/cp-test.txt multinode-852984-m03:/home/docker/cp-test_multinode-852984_multinode-852984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984 "sudo cat /home/docker/cp-test.txt"
E1114 13:56:17.545518 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m03 "sudo cat /home/docker/cp-test_multinode-852984_multinode-852984-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp testdata/cp-test.txt multinode-852984-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile99209166/001/cp-test_multinode-852984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984-m02:/home/docker/cp-test.txt multinode-852984:/home/docker/cp-test_multinode-852984-m02_multinode-852984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984 "sudo cat /home/docker/cp-test_multinode-852984-m02_multinode-852984.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984-m02:/home/docker/cp-test.txt multinode-852984-m03:/home/docker/cp-test_multinode-852984-m02_multinode-852984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m03 "sudo cat /home/docker/cp-test_multinode-852984-m02_multinode-852984-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp testdata/cp-test.txt multinode-852984-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile99209166/001/cp-test_multinode-852984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984-m03:/home/docker/cp-test.txt multinode-852984:/home/docker/cp-test_multinode-852984-m03_multinode-852984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984 "sudo cat /home/docker/cp-test_multinode-852984-m03_multinode-852984.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 cp multinode-852984-m03:/home/docker/cp-test.txt multinode-852984-m02:/home/docker/cp-test_multinode-852984-m03_multinode-852984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 ssh -n multinode-852984-m02 "sudo cat /home/docker/cp-test_multinode-852984-m03_multinode-852984-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.88s)

TestMultiNode/serial/StopNode (2.51s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-852984 node stop m03: (1.272200418s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852984 status: exit status 7 (616.187099ms)

-- stdout --
	multinode-852984
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-852984-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-852984-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr: exit status 7 (617.395266ms)

-- stdout --
	multinode-852984
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-852984-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-852984-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 13:56:27.499727 1333224 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:56:27.499972 1333224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:56:27.500000 1333224 out.go:309] Setting ErrFile to fd 2...
	I1114 13:56:27.500021 1333224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:56:27.500326 1333224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:56:27.500598 1333224 out.go:303] Setting JSON to false
	I1114 13:56:27.500694 1333224 mustload.go:65] Loading cluster: multinode-852984
	I1114 13:56:27.500768 1333224 notify.go:220] Checking for updates...
	I1114 13:56:27.501239 1333224 config.go:182] Loaded profile config "multinode-852984": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:56:27.501279 1333224 status.go:255] checking status of multinode-852984 ...
	I1114 13:56:27.503212 1333224 cli_runner.go:164] Run: docker container inspect multinode-852984 --format={{.State.Status}}
	I1114 13:56:27.522892 1333224 status.go:330] multinode-852984 host status = "Running" (err=<nil>)
	I1114 13:56:27.522947 1333224 host.go:66] Checking if "multinode-852984" exists ...
	I1114 13:56:27.523252 1333224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-852984
	I1114 13:56:27.565682 1333224 host.go:66] Checking if "multinode-852984" exists ...
	I1114 13:56:27.565984 1333224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:56:27.566030 1333224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-852984
	I1114 13:56:27.586128 1333224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34412 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/multinode-852984/id_rsa Username:docker}
	I1114 13:56:27.684011 1333224 ssh_runner.go:195] Run: systemctl --version
	I1114 13:56:27.690047 1333224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:56:27.705223 1333224 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:56:27.784849 1333224 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:65 SystemTime:2023-11-14 13:56:27.772471553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:56:27.785488 1333224 kubeconfig.go:92] found "multinode-852984" server: "https://192.168.58.2:8443"
	I1114 13:56:27.785508 1333224 api_server.go:166] Checking apiserver status ...
	I1114 13:56:27.785555 1333224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:56:27.799783 1333224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1324/cgroup
	I1114 13:56:27.813060 1333224 api_server.go:182] apiserver freezer: "2:freezer:/docker/b8a3758366449d87be71129694d2215fa80e78b59c67c48d68a8d8a15da4d870/kubepods/burstable/pod637fe3ca29d373b9a284c955d0ad4f38/344e943c6d1eb872d5e53adf288f4e842a8c34cb2afbe75b659b489f612050ca"
	I1114 13:56:27.813160 1333224 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b8a3758366449d87be71129694d2215fa80e78b59c67c48d68a8d8a15da4d870/kubepods/burstable/pod637fe3ca29d373b9a284c955d0ad4f38/344e943c6d1eb872d5e53adf288f4e842a8c34cb2afbe75b659b489f612050ca/freezer.state
	I1114 13:56:27.825016 1333224 api_server.go:204] freezer state: "THAWED"
	I1114 13:56:27.825050 1333224 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1114 13:56:27.834923 1333224 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1114 13:56:27.834964 1333224 status.go:421] multinode-852984 apiserver status = Running (err=<nil>)
	I1114 13:56:27.834998 1333224 status.go:257] multinode-852984 status: &{Name:multinode-852984 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:56:27.835041 1333224 status.go:255] checking status of multinode-852984-m02 ...
	I1114 13:56:27.835391 1333224 cli_runner.go:164] Run: docker container inspect multinode-852984-m02 --format={{.State.Status}}
	I1114 13:56:27.854749 1333224 status.go:330] multinode-852984-m02 host status = "Running" (err=<nil>)
	I1114 13:56:27.854777 1333224 host.go:66] Checking if "multinode-852984-m02" exists ...
	I1114 13:56:27.855081 1333224 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-852984-m02
	I1114 13:56:27.873851 1333224 host.go:66] Checking if "multinode-852984-m02" exists ...
	I1114 13:56:27.874175 1333224 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:56:27.874226 1333224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-852984-m02
	I1114 13:56:27.893218 1333224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34417 SSHKeyPath:/home/jenkins/minikube-integration/17581-1246551/.minikube/machines/multinode-852984-m02/id_rsa Username:docker}
	I1114 13:56:27.996372 1333224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:56:28.013421 1333224 status.go:257] multinode-852984-m02 status: &{Name:multinode-852984-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:56:28.013459 1333224 status.go:255] checking status of multinode-852984-m03 ...
	I1114 13:56:28.013812 1333224 cli_runner.go:164] Run: docker container inspect multinode-852984-m03 --format={{.State.Status}}
	I1114 13:56:28.034607 1333224 status.go:330] multinode-852984-m03 host status = "Stopped" (err=<nil>)
	I1114 13:56:28.034631 1333224 status.go:343] host is not running, skipping remaining checks
	I1114 13:56:28.034639 1333224 status.go:257] multinode-852984-m03 status: &{Name:multinode-852984-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)
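
Note the pattern in the output above: with one node stopped, `minikube status` still prints a full status table but exits non-zero (exit status 7 in this run), so a caller must not treat a non-zero exit as a hard failure. A hedged sketch of how a caller might handle that, under the assumption that a non-zero exit with parseable output means "degraded" (this program is illustrative, not part of the suite):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-852984", "status").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr):
		// A non-zero exit (7 in the run above) still comes with a usable status table.
		fmt.Printf("cluster degraded (exit %d):\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}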

TestMultiNode/serial/StartAfterStop (12.49s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-852984 node start m03 --alsologtostderr: (11.563631826s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.49s)

TestMultiNode/serial/RestartKeepsNodes (121.13s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-852984
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-852984
E1114 13:56:45.227851 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 13:57:03.997702 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-852984: (25.151445654s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852984 --wait=true -v=8 --alsologtostderr
E1114 13:58:27.042392 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 13:58:30.197366 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852984 --wait=true -v=8 --alsologtostderr: (1m35.803677515s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-852984
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.13s)

TestMultiNode/serial/DeleteNode (5.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-852984 node delete m03: (4.503715522s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.27s)
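
The readiness assertion after node deletion relies on the kubectl go-template shown above, which prints each node's Ready condition status on its own line. A small illustrative sketch of the same check (not part of the suite; it assumes kubectl on PATH with a configured context):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same go-template as in the log: emit each node's Ready condition status.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("found a node that is not Ready:", status)
			return
		}
	}
	fmt.Println("all nodes report Ready")
}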

TestMultiNode/serial/StopMultiNode (24.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-852984 stop: (24.11531546s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852984 status: exit status 7 (119.220142ms)

-- stdout --
	multinode-852984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-852984-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr: exit status 7 (111.763708ms)

-- stdout --
	multinode-852984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-852984-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 13:59:11.241835 1341829 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:59:11.242029 1341829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:59:11.242038 1341829 out.go:309] Setting ErrFile to fd 2...
	I1114 13:59:11.242044 1341829 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:59:11.242297 1341829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 13:59:11.242487 1341829 out.go:303] Setting JSON to false
	I1114 13:59:11.242540 1341829 mustload.go:65] Loading cluster: multinode-852984
	I1114 13:59:11.242636 1341829 notify.go:220] Checking for updates...
	I1114 13:59:11.242949 1341829 config.go:182] Loaded profile config "multinode-852984": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 13:59:11.242960 1341829 status.go:255] checking status of multinode-852984 ...
	I1114 13:59:11.243928 1341829 cli_runner.go:164] Run: docker container inspect multinode-852984 --format={{.State.Status}}
	I1114 13:59:11.263189 1341829 status.go:330] multinode-852984 host status = "Stopped" (err=<nil>)
	I1114 13:59:11.263208 1341829 status.go:343] host is not running, skipping remaining checks
	I1114 13:59:11.263225 1341829 status.go:257] multinode-852984 status: &{Name:multinode-852984 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 13:59:11.263265 1341829 status.go:255] checking status of multinode-852984-m02 ...
	I1114 13:59:11.263569 1341829 cli_runner.go:164] Run: docker container inspect multinode-852984-m02 --format={{.State.Status}}
	I1114 13:59:11.280992 1341829 status.go:330] multinode-852984-m02 host status = "Stopped" (err=<nil>)
	I1114 13:59:11.281018 1341829 status.go:343] host is not running, skipping remaining checks
	I1114 13:59:11.281026 1341829 status.go:257] multinode-852984-m02 status: &{Name:multinode-852984-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.35s)

TestMultiNode/serial/RestartMultiNode (88.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852984 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852984 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m27.511473381s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-852984 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (88.37s)

TestMultiNode/serial/ValidateNameConflict (37.69s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-852984
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852984-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-852984-m02 --driver=docker  --container-runtime=containerd: exit status 14 (95.108781ms)

-- stdout --
	* [multinode-852984-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-852984-m02' is duplicated with machine name 'multinode-852984-m02' in profile 'multinode-852984'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-852984-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-852984-m03 --driver=docker  --container-runtime=containerd: (35.035123062s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-852984
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-852984: exit status 80 (451.552052ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-852984
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-852984-m03 already exists in multinode-852984-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-852984-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-852984-m03: (2.040037977s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.69s)
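The exit-14 failure above is minikube's profile-name uniqueness check: "multinode-852984-m02" is already taken as a machine name inside the existing multinode profile, while "-m03" was still free to become a standalone profile (which in turn made the later "node add" collide). A minimal sketch of that rule (hypothetical helper, not minikube's actual validation code):

    package main

    import "fmt"

    // machineNames mirrors minikube's "<profile>", "<profile>-m02", ... naming.
    func machineNames(profile string, nodes int) []string {
    	names := []string{profile}
    	for i := 2; i <= nodes; i++ {
    		names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
    	}
    	return names
    }

    // validateName rejects a new profile whose name is already used as a
    // machine name by an existing profile.
    func validateName(newProfile string, existing map[string]int) error {
    	for profile, nodes := range existing {
    		for _, m := range machineNames(profile, nodes) {
    			if m == newProfile {
    				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
    					newProfile, m, profile)
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	existing := map[string]int{"multinode-852984": 2}            // the two-node cluster above
    	fmt.Println(validateName("multinode-852984-m02", existing)) // duplicated: exit 14 in minikube
    	fmt.Println(validateName("multinode-852984-m03", existing)) // <nil>: allowed as a fresh profile
    }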

                                                
                                    
TestPreload (176.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-624472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1114 14:02:03.997814 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-624472 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.10874444s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-624472 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-624472 image pull gcr.io/k8s-minikube/busybox: (1.811522014s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-624472
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-624472: (12.05192351s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-624472 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1114 14:03:30.197263 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-624472 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m27.337819776s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-624472 image list
helpers_test.go:175: Cleaning up "test-preload-624472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-624472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-624472: (2.557241766s)
--- PASS: TestPreload (176.13s)
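The test drives the preload round trip: start v1.24.4 with --preload=false, pull an extra image, stop, restart with preloading enabled, then list images to confirm the pulled image survived. A minimal sketch of that final assertion (hypothetical helper; the second image in "after" is illustrative, not from the log):

    package main

    import "fmt"

    // missing reports which pre-stop images are absent after the restart.
    func missing(before, after []string) []string {
    	have := map[string]bool{}
    	for _, img := range after {
    		have[img] = true
    	}
    	var gone []string
    	for _, img := range before {
    		if !have[img] {
    			gone = append(gone, img)
    		}
    	}
    	return gone
    }

    func main() {
    	before := []string{"gcr.io/k8s-minikube/busybox"} // pulled at preload_test.go:52
    	after := []string{"gcr.io/k8s-minikube/busybox", "registry.k8s.io/pause"}
    	fmt.Println(missing(before, after)) // []: the pulled image survived the restart
    }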

                                                
                                    
TestScheduledStopUnix (107.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-958763 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-958763 --memory=2048 --driver=docker  --container-runtime=containerd: (30.63364865s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-958763 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-958763 -n scheduled-stop-958763
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-958763 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-958763 --cancel-scheduled
E1114 14:04:53.247579 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-958763 -n scheduled-stop-958763
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-958763
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-958763 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-958763
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-958763: exit status 7 (92.677468ms)

                                                
                                                
-- stdout --
	scheduled-stop-958763
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-958763 -n scheduled-stop-958763
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-958763 -n scheduled-stop-958763: exit status 7 (258.071699ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-958763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-958763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-958763: (4.753276658s)
--- PASS: TestScheduledStopUnix (107.45s)
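The sequence above schedules a stop, reschedules it, and cancels it; "os: process already finished" shows that each new --schedule replaces the previous scheduler process. A minimal in-process sketch of the schedule/cancel semantics (a plain timer, not minikube's daemonized implementation):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Schedule the "stop" two seconds out, standing in for --schedule 15s.
    	stop := time.AfterFunc(2*time.Second, func() { fmt.Println("stopping cluster") })

    	// --cancel-scheduled amounts to killing the pending schedule.
    	if stop.Stop() {
    		fmt.Println("scheduled stop canceled")
    	}
    	time.Sleep(3 * time.Second) // nothing fires: the schedule was canceled
    }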

                                                
                                    
TestInsufficientStorage (10.97s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-629992 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-629992 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.318108615s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cae5700c-bb1c-478a-bc4d-ad6c06d60859","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-629992] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e82ea74-7851-4c8b-9735-b19ed293380c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17581"}}
	{"specversion":"1.0","id":"1f06c0c0-b7e4-4627-a01e-b003a3cfb34f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"71daae9a-7e2c-49c4-a3b0-d653a5cc3795","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig"}}
	{"specversion":"1.0","id":"2c824445-42d8-4921-b82e-23150d92d97c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube"}}
	{"specversion":"1.0","id":"97011485-7ab1-46f7-a761-90a7ac313301","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"cecf5906-600b-49ea-8de0-3b0308bb5521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"45e41fbc-952a-416e-aa1e-2422d2887b23","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8528d28c-54b9-4cf5-85f2-3c519359b139","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e5a04b25-6838-4658-b27a-6f8855f57c49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2bd382bc-64db-4f58-88c1-d09653f576cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"17bf4241-352e-46d3-8ca5-089af84de964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-629992 in cluster insufficient-storage-629992","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cc23be5-f63c-48d0-b1b5-e7e4ae8a634d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"08f05d46-3bf2-4510-b50b-1836069fa611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"58fc7747-1e9a-4362-8744-6e6b3f1e2dc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-629992 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-629992 --output=json --layout=cluster: exit status 7 (322.507153ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-629992","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-629992","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 14:06:13.600413 1359266 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-629992" does not appear in /home/jenkins/minikube-integration/17581-1246551/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-629992 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-629992 --output=json --layout=cluster: exit status 7 (338.602398ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-629992","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-629992","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 14:06:13.940696 1359319 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-629992" does not appear in /home/jenkins/minikube-integration/17581-1246551/kubeconfig
	E1114 14:06:13.953606 1359319 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/insufficient-storage-629992/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-629992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-629992
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-629992: (1.992111111s)
--- PASS: TestInsufficientStorage (10.97s)
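With --output=json, every progress step and the final RSRC_DOCKER_STORAGE error arrive as one CloudEvent per line, and the exit code (26) travels inside the event's data payload as a string. A minimal sketch of decoding such a line (the struct keeps only the envelope fields needed here; the event text is abbreviated from the stdout above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // cloudEvent models just enough of minikube's JSON event envelope.
    type cloudEvent struct {
    	Type string            `json:"type"`
    	Data map[string]string `json:"data"`
    }

    func main() {
    	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`

    	var ev cloudEvent
    	if err := json.Unmarshal([]byte(line), &ev); err != nil {
    		panic(err)
    	}
    	if ev.Type == "io.k8s.sigs.minikube.error" {
    		// The exit code is carried as a string inside the payload.
    		fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
    	}
    }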

                                                
                                    
TestRunningBinaryUpgrade (89.29s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.26.0.3950475928.exe start -p running-upgrade-564447 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1114 14:11:17.544923 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.26.0.3950475928.exe start -p running-upgrade-564447 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.981704251s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-564447 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1114 14:12:03.997433 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-564447 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.303431784s)
helpers_test.go:175: Cleaning up "running-upgrade-564447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-564447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-564447: (3.450052927s)
--- PASS: TestRunningBinaryUpgrade (89.29s)

                                                
                                    
TestKubernetesUpgrade (386.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1114 14:08:30.197852 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m11.262736135s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-642929
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-642929: (1.557597276s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-642929 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-642929 status --format={{.Host}}: exit status 7 (143.363704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m41.214497928s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-642929 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (128.846513ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-642929] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-642929
	    minikube start -p kubernetes-upgrade-642929 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6429292 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-642929 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642929 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (29.24196801s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-642929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-642929
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-642929: (2.653097739s)
--- PASS: TestKubernetesUpgrade (386.35s)
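The exit-106 K8S_DOWNGRADE_UNSUPPORTED error shows the guard in action: requesting v1.16.0 against an existing v1.28.3 cluster is refused, while re-requesting v1.28.3 restarts cleanly. A minimal sketch of a version comparison that captures this behavior (hypothetical parsing, not minikube's actual code):

    package main

    import "fmt"

    // parse turns "v1.28.3" into comparable integers; error handling is elided.
    func parse(v string) (maj, min, patch int) {
    	fmt.Sscanf(v, "v%d.%d.%d", &maj, &min, &patch)
    	return
    }

    // isDowngrade reports whether requested is older than existing.
    func isDowngrade(existing, requested string) bool {
    	emaj, emin, epat := parse(existing)
    	rmaj, rmin, rpat := parse(requested)
    	if rmaj != emaj {
    		return rmaj < emaj
    	}
    	if rmin != emin {
    		return rmin < emin
    	}
    	return rpat < epat
    }

    func main() {
    	fmt.Println(isDowngrade("v1.28.3", "v1.16.0")) // true: refused, exit 106
    	fmt.Println(isDowngrade("v1.28.3", "v1.28.3")) // false: restart proceeds
    }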

                                                
                                    
TestMissingContainerUpgrade (148.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.26.0.207868475.exe start -p missing-upgrade-651889 --memory=2200 --driver=docker  --container-runtime=containerd
E1114 14:06:17.546047 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.26.0.207868475.exe start -p missing-upgrade-651889 --memory=2200 --driver=docker  --container-runtime=containerd: (1m18.383349163s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-651889
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-651889
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-651889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-651889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.034198572s)
helpers_test.go:175: Cleaning up "missing-upgrade-651889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-651889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-651889: (2.404188567s)
--- PASS: TestMissingContainerUpgrade (148.99s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017607 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-017607 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (103.567379ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-017607] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
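Exit code 14 (MK_USAGE) here comes from a simple mutual-exclusion check: --kubernetes-version has no meaning once --no-kubernetes is set. A minimal sketch of such a check with the standard flag package (minikube itself uses cobra/pflag; the flag names and exit code mirror the log):

    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
    	flag.Parse()

    	if *noK8s && *k8sVersion != "" {
    		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14)
    	}
    	fmt.Println("flags ok")
    }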

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017607 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017607 --driver=docker  --container-runtime=containerd: (40.685927786s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-017607 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017607 --no-kubernetes --driver=docker  --container-runtime=containerd
E1114 14:07:03.998028 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017607 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.596458092s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-017607 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-017607 status -o json: exit status 2 (441.370466ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-017607","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-017607
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-017607: (2.127282296s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.17s)

                                                
                                    
TestNoKubernetes/serial/Start (10.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017607 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017607 --no-kubernetes --driver=docker  --container-runtime=containerd: (10.164589947s)
--- PASS: TestNoKubernetes/serial/Start (10.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-017607 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-017607 "sudo systemctl is-active --quiet service kubelet": exit status 1 (551.890589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.55s)
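The assertion relies on systemctl's exit-status convention: "is-active --quiet" exits 0 only when the unit is active, and the log's "Process exited with status 3" is the usual inactive status. A minimal local sketch of the same probe (run via exec instead of minikube ssh, and simplified to the single kubelet unit):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	if exitErr, ok := err.(*exec.ExitError); ok {
    		// Non-zero (typically 3) means the unit is not running, which is what the test wants.
    		fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode())
    		return
    	}
    	if err != nil {
    		fmt.Println("systemctl not available:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }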

                                                
                                    
TestNoKubernetes/serial/ProfileList (6.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-arm64 profile list: (6.098747674s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (6.62s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-017607
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-017607: (1.27829764s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-017607 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-017607 --driver=docker  --container-runtime=containerd: (7.238056117s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-017607 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-017607 "sudo systemctl is-active --quiet service kubelet": exit status 1 (308.647644ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.46s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (116.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.26.0.4063071874.exe start -p stopped-upgrade-300507 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.26.0.4063071874.exe start -p stopped-upgrade-300507 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.506218611s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.26.0.4063071874.exe -p stopped-upgrade-300507 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.26.0.4063071874.exe -p stopped-upgrade-300507 stop: (20.121496917s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-300507 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-300507 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (44.084370487s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.71s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-300507
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-300507: (1.138062478s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
TestPause/serial/Start (64.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-367070 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-367070 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m4.355760969s)
--- PASS: TestPause/serial/Start (64.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.26s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-367070 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-367070 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.234798585s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.26s)

                                                
                                    
TestPause/serial/Pause (1.26s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-367070 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-367070 --alsologtostderr -v=5: (1.257314747s)
--- PASS: TestPause/serial/Pause (1.26s)

                                                
                                    
TestPause/serial/VerifyStatus (0.59s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-367070 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-367070 --output=json --layout=cluster: exit status 2 (588.433998ms)

                                                
                                                
-- stdout --
	{"Name":"pause-367070","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-367070","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.59s)
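The cluster-layout JSON reuses HTTP-flavored status codes: 200 OK, 405 Stopped, and 418 Paused here, plus the 500 Error and 507 InsufficientStorage seen earlier in this report. A minimal sketch of that mapping (the table is assembled from the codes visible in this report, not exported from minikube):

    package main

    import "fmt"

    // statusName collects the codes that appear in --output=json --layout=cluster output above.
    var statusName = map[int]string{
    	200: "OK",
    	405: "Stopped",
    	418: "Paused",
    	500: "Error",
    	507: "InsufficientStorage",
    }

    func main() {
    	for _, code := range []int{418, 405, 200} {
    		fmt.Printf("%d => %s\n", code, statusName[code])
    	}
    }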

                                                
                                    
TestPause/serial/Unpause (1.08s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-367070 --alsologtostderr -v=5
E1114 14:13:30.197548 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-367070 --alsologtostderr -v=5: (1.081937592s)
--- PASS: TestPause/serial/Unpause (1.08s)

                                                
                                    
TestPause/serial/PauseAgain (1.32s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-367070 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-367070 --alsologtostderr -v=5: (1.324174344s)
--- PASS: TestPause/serial/PauseAgain (1.32s)

                                                
                                    
TestPause/serial/DeletePaused (3.1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-367070 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-367070 --alsologtostderr -v=5: (3.094908281s)
--- PASS: TestPause/serial/DeletePaused (3.10s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.9s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-367070
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-367070: exit status 1 (26.101957ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-367070: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.90s)
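The deletion check treats a failing "docker volume inspect" (exit 1 with "no such volume") as proof that delete cleaned up the profile's volume. A minimal sketch of the same probe (local docker CLI; the volume name is taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("docker", "volume", "inspect", "pause-367070").CombinedOutput()
    	if err != nil {
    		// Matches the log: non-zero exit plus "no such volume" means cleanup succeeded.
    		fmt.Printf("volume gone as expected: %s", out)
    		return
    	}
    	fmt.Println("volume still exists; cleanup failed")
    }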

                                                
                                    
TestNetworkPlugins/group/false (6.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-983191 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-983191 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (323.372433ms)

                                                
                                                
-- stdout --
	* [false-983191] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 14:14:14.715411 1397641 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:14:14.715675 1397641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:14:14.715683 1397641 out.go:309] Setting ErrFile to fd 2...
	I1114 14:14:14.715690 1397641 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:14:14.715999 1397641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1246551/.minikube/bin
	I1114 14:14:14.716539 1397641 out.go:303] Setting JSON to false
	I1114 14:14:14.718032 1397641 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":39401,"bootTime":1699931854,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1114 14:14:14.718107 1397641 start.go:138] virtualization:  
	I1114 14:14:14.723395 1397641 out.go:177] * [false-983191] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 14:14:14.725233 1397641 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:14:14.725272 1397641 notify.go:220] Checking for updates...
	I1114 14:14:14.728033 1397641 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:14:14.730193 1397641 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1246551/kubeconfig
	I1114 14:14:14.732059 1397641 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1246551/.minikube
	I1114 14:14:14.734209 1397641 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 14:14:14.736001 1397641 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:14:14.738281 1397641 config.go:182] Loaded profile config "force-systemd-flag-713691": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.3
	I1114 14:14:14.738450 1397641 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:14:14.799850 1397641 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 14:14:14.799944 1397641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:14:14.933888 1397641 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-14 14:14:14.92149476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:14:14.934020 1397641 docker.go:295] overlay module found
	I1114 14:14:14.936038 1397641 out.go:177] * Using the docker driver based on user configuration
	I1114 14:14:14.937625 1397641 start.go:298] selected driver: docker
	I1114 14:14:14.937647 1397641 start.go:902] validating driver "docker" against <nil>
	I1114 14:14:14.937662 1397641 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:14:14.939995 1397641 out.go:177] 
	W1114 14:14:14.941846 1397641 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1114 14:14:14.943695 1397641 out.go:177] 

                                                
                                                
** /stderr **
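The MK_USAGE failure above is the point of this "false" network-plugin group: --cni=false cannot be combined with the containerd runtime, which needs a CNI plugin for pod networking. A minimal sketch of that validation rule (hypothetical table and helper, not minikube's actual validator); the debugLogs dump that follows is expected to find no cluster, since start never got past flag validation:

    package main

    import (
    	"fmt"
    	"os"
    )

    // needsCNI lists runtimes that cannot run pods without a CNI plugin;
    // the table is an assumption for illustration.
    var needsCNI = map[string]bool{"containerd": true, "crio": true}

    func validateCNI(runtime, cni string) error {
    	if cni == "false" && needsCNI[runtime] {
    		return fmt.Errorf("the %q container runtime requires CNI", runtime)
    	}
    	return nil
    }

    func main() {
    	if err := validateCNI("containerd", "false"); err != nil {
    		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
    		os.Exit(14) // the exit status the test observed above
    	}
    }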
net_test.go:88: 
----------------------- debugLogs start: false-983191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-983191

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

                                                
                                                

>>> host: /etc/hosts:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /etc/resolv.conf:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-983191

>>> host: crictl pods:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: crictl containers:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> k8s: describe netcat deployment:
error: context "false-983191" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-983191" does not exist

>>> k8s: netcat logs:
error: context "false-983191" does not exist

>>> k8s: describe coredns deployment:
error: context "false-983191" does not exist

>>> k8s: describe coredns pods:
error: context "false-983191" does not exist

>>> k8s: coredns logs:
error: context "false-983191" does not exist

>>> k8s: describe api server pod(s):
error: context "false-983191" does not exist

>>> k8s: api server logs:
error: context "false-983191" does not exist

>>> host: /etc/cni:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: ip a s:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: ip r s:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: iptables-save:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: iptables table nat:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> k8s: describe kube-proxy daemon set:
error: context "false-983191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-983191" does not exist

>>> k8s: kube-proxy logs:
error: context "false-983191" does not exist

>>> host: kubelet daemon status:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: kubelet daemon config:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> k8s: kubelet logs:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17581-1246551/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 14 Nov 2023 14:14:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-713691
contexts:
- context:
    cluster: force-systemd-flag-713691
    extensions:
    - extension:
        last-update: Tue, 14 Nov 2023 14:14:15 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-713691
  name: force-systemd-flag-713691
current-context: force-systemd-flag-713691
kind: Config
preferences: {}
users:
- name: force-systemd-flag-713691
  user:
    client-certificate: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/force-systemd-flag-713691/client.crt
    client-key: /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/force-systemd-flag-713691/client.key

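Note: the kubeconfig captured above has current-context: force-systemd-flag-713691, not false-983191, which is consistent with the repeated "context was not found" / "does not exist" errors in this debug dump: the false-983191 profile was never actually started, so no kubeconfig entry was written for it. A quick manual check (a sketch; it assumes kubectl is installed on the CI host and reads the same kubeconfig):

kubectl config current-context                                   # prints force-systemd-flag-713691 here
kubectl config get-contexts -o name | grep false-983191 || echo "no such context"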

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-983191

>>> host: docker daemon status:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: docker daemon config:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /etc/docker/daemon.json:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: docker system info:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: cri-docker daemon status:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: cri-docker daemon config:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: cri-dockerd version:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: containerd daemon status:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: containerd daemon config:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /etc/containerd/config.toml:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: containerd config dump:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: crio daemon status:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: crio daemon config:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: /etc/crio:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

>>> host: crio config:
* Profile "false-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-983191"

----------------------- debugLogs end: false-983191 [took: 5.806956256s] --------------------------------
helpers_test.go:175: Cleaning up "false-983191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-983191
--- PASS: TestNetworkPlugins/group/false (6.40s)
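
Profile cleanup here is the standard minikube flow; the hint printed throughout the debug dump above can also be followed by hand (a sketch, using the same binary under test):

out/minikube-linux-arm64 profile list            # shows which profiles actually exist
out/minikube-linux-arm64 delete -p false-983191  # what helpers_test.go runs for cleanup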

TestStartStop/group/old-k8s-version/serial/FirstStart (127.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-029919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E1114 14:16:17.544961 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 14:17:03.997431 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-029919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m7.371082093s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.37s)
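
The E1114 cert_rotation lines above (and interleaved with later tests) appear to come from client-go's certificate-reload watcher, which is still polling client.crt files of profiles that earlier tests already deleted (ingress-addon-legacy-011886, addons-135796, and so on); they are noise for this test, which passes regardless. The watched path can be checked directly (a sketch; the path is copied verbatim from the log):

ls -l /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt
# expected: "No such file or directory", matching the cert_rotation messages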

TestStartStop/group/old-k8s-version/serial/DeployApp (8.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-029919 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [547316cc-693c-424c-8013-d6c82e6a2e16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [547316cc-693c-424c-8013-d6c82e6a2e16] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.037527816s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-029919 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.54s)
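
The DeployApp step waits up to 8m for the busybox pod and then reads the container's open-file limit; the harness poller can be approximated with kubectl alone (a sketch; kubectl wait stands in for the helpers_test.go polling loop):

kubectl --context old-k8s-version-029919 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
kubectl --context old-k8s-version-029919 exec busybox -- /bin/sh -c "ulimit -n"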

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-029919 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-029919 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/old-k8s-version/serial/Stop (12.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-029919 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-029919 --alsologtostderr -v=3: (12.496288819s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-029919 -n old-k8s-version-029919
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-029919 -n old-k8s-version-029919: exit status 7 (136.471713ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-029919 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
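
As the "(may be ok)" annotation says, minikube status encodes cluster state in its exit code, so exit status 7 against a stopped profile is the expected outcome rather than a failure; the test only needs the Host field to read Stopped before re-enabling the addon. Reproduced by hand (a sketch):

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-029919 -n old-k8s-version-029919
echo $?   # 7 while the profile is stopped; 0 once it is running again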

TestStartStop/group/old-k8s-version/serial/SecondStart (667.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-029919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-029919 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: (11m6.749328333s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-029919 -n old-k8s-version-029919
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (667.21s)

TestStartStop/group/no-preload/serial/FirstStart (79.97s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-938012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:18:30.197350 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-938012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m19.96600621s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.97s)

TestStartStop/group/no-preload/serial/DeployApp (8.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-938012 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e87f5354-d715-403d-98ac-1af5a7263562] Pending
helpers_test.go:344: "busybox" [e87f5354-d715-403d-98ac-1af5a7263562] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e87f5354-d715-403d-98ac-1af5a7263562] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.036129362s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-938012 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-938012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-938012 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11870001s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-938012 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/no-preload/serial/Stop (12.21s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-938012 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-938012 --alsologtostderr -v=3: (12.208598558s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.21s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-938012 -n no-preload-938012
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-938012 -n no-preload-938012: exit status 7 (107.865484ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-938012 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (337.29s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-938012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:21:17.545292 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 14:21:33.247859 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 14:22:03.997479 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 14:23:30.200431 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
E1114 14:24:20.589132 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-938012 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m36.824955083s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-938012 -n no-preload-938012
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (337.29s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hjlp4" [5ef1dc55-963e-41cb-ba7d-4e61c211fe97] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hjlp4" [5ef1dc55-963e-41cb-ba7d-4e61c211fe97] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.026197227s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.03s)
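
UserAppExistsAfterStop only needs the dashboard pod deployed before the stop to come back healthy after the restart; the label query the harness polls can be issued directly (a sketch):

kubectl --context no-preload-938012 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard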

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-hjlp4" [5ef1dc55-963e-41cb-ba7d-4e61c211fe97] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011520685s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-938012 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-938012 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
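
VerifyKubernetesImages lists the images cached inside the node and reports anything outside the stock minikube set; kindest/kindnetd and the busybox test image are expected extras here, not failures. The same listing, flattened to tags (a sketch; jq on the host is an assumption, not part of the harness):

out/minikube-linux-arm64 ssh -p no-preload-938012 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'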

TestStartStop/group/no-preload/serial/Pause (3.48s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-938012 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938012 -n no-preload-938012
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938012 -n no-preload-938012: exit status 2 (371.084921ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-938012 -n no-preload-938012
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-938012 -n no-preload-938012: exit status 2 (366.822445ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-938012 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938012 -n no-preload-938012
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-938012 -n no-preload-938012
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.48s)
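
The Pause subtest drives a pause/status/unpause/status round-trip: while paused, {{.APIServer}} renders Paused and {{.Kubelet}} renders Stopped, each with exit status 2, and both status calls succeed again after unpause. By hand (a sketch):

out/minikube-linux-arm64 pause -p no-preload-938012
out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938012 -n no-preload-938012   # Paused, exit 2
out/minikube-linux-arm64 unpause -p no-preload-938012
out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938012 -n no-preload-938012   # expected Running, exit 0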

TestStartStop/group/embed-certs/serial/FirstStart (61.49s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-110602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:26:17.545090 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 14:27:03.997879 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-110602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m1.492355441s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (61.49s)

TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-110602 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2c99b4c6-9ec3-491c-b20c-abd5500e8736] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2c99b4c6-9ec3-491c-b20c-abd5500e8736] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.035286809s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-110602 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-110602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-110602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.173229712s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-110602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/embed-certs/serial/Stop (12.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-110602 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-110602 --alsologtostderr -v=3: (12.20350644s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.20s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-110602 -n embed-certs-110602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-110602 -n embed-certs-110602: exit status 7 (91.827214ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-110602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (335.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-110602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:28:30.197198 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-110602 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m34.733835213s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-110602 -n embed-certs-110602
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (335.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bg2zk" [12a61b25-5278-4737-bf4d-de531ae28153] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023929312s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-bg2zk" [12a61b25-5278-4737-bf4d-de531ae28153] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010467634s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-029919 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-029919 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/old-k8s-version/serial/Pause (3.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-029919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-029919 -n old-k8s-version-029919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-029919 -n old-k8s-version-029919: exit status 2 (443.118218ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-029919 -n old-k8s-version-029919
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-029919 -n old-k8s-version-029919: exit status 2 (400.295925ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-029919 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-029919 -n old-k8s-version-029919
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-029919 -n old-k8s-version-029919
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.71s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-765778 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:29:40.424812 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:40.430113 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:40.440401 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:40.460774 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:40.501127 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:40.581387 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:40.741801 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:41.213480 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:41.854027 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:43.134229 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:45.694943 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:29:50.815086 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:30:01.055459 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:30:21.536012 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-765778 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (1m3.017263178s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (63.02s)
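
This group runs the API server on the non-default port 8444 (--apiserver-port=8444), so the kubeconfig entry written for the profile should end in :8444; that can be confirmed with a jsonpath query (a sketch):

kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-765778")].cluster.server}'
# expected: https://<node-ip>:8444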

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-765778 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8970a97a-f5dd-405e-97d7-ab781186b51b] Pending
helpers_test.go:344: "busybox" [8970a97a-f5dd-405e-97d7-ab781186b51b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8970a97a-f5dd-405e-97d7-ab781186b51b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.038765109s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-765778 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.54s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-765778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-765778 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.181118533s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-765778 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-765778 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-765778 --alsologtostderr -v=3: (12.223278687s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778: exit status 7 (187.31055ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-765778 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-765778 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:31:02.496230 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:31:17.545805 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/ingress-addon-legacy-011886/client.crt: no such file or directory
E1114 14:31:47.043731 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 14:32:03.997695 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/addons-135796/client.crt: no such file or directory
E1114 14:32:24.417104 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:32:49.217113 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.222571 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.232899 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.253219 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.293445 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.373718 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.534540 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:49.855059 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:50.495323 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:51.775980 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:54.336962 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:32:59.458126 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-765778 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (5m41.424151131s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (342.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rx88h" [49662e62-1f4b-418f-8c56-73da07710e37] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rx88h" [49662e62-1f4b-418f-8c56-73da07710e37] Running
E1114 14:33:09.699170 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.02640733s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.03s)
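
The check above waits on a label selector rather than a fixed pod name. A rough manual equivalent, assuming the embed-certs-110602 profile is still running (profile, namespace, and label taken from the log):

	kubectl --context embed-certs-110602 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard --watch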

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rx88h" [49662e62-1f4b-418f-8c56-73da07710e37] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011668676s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-110602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-110602 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)
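
VerifyKubernetesImages lists the images in the node's containerd store over SSH and reports any image outside the expected minikube set, such as the kindnet and busybox images above. The logged command can be replayed by hand while the profile exists:

	out/minikube-linux-arm64 ssh -p embed-certs-110602 "sudo crictl images -o json"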

TestStartStop/group/embed-certs/serial/Pause (3.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-110602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110602 -n embed-certs-110602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110602 -n embed-certs-110602: exit status 2 (404.784282ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-110602 -n embed-certs-110602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-110602 -n embed-certs-110602: exit status 2 (389.597683ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-110602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110602 -n embed-certs-110602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-110602 -n embed-certs-110602
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.74s)
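
The Pause subtest leans on the exit code of "minikube status": with the control plane paused, the APIServer field prints Paused, the kubelet reports Stopped, and the command exits 2, which the test explicitly tolerates ("may be ok"). A minimal sketch of the same round-trip, assuming the profile is up:

	out/minikube-linux-arm64 pause -p embed-certs-110602
	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-110602
	out/minikube-linux-arm64 unpause -p embed-certs-110602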

TestStartStop/group/newest-cni/serial/FirstStart (43.35s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-147621 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:33:30.180364 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:33:30.197719 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-147621 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (43.348470111s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.35s)
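
This start exercises several pass-through flags: --feature-gates is forwarded to the Kubernetes components, and --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 uses minikube's component.key=value syntax to set a kubeadm option. Because --network-plugin=cni is requested without deploying a CNI, --wait is narrowed to apiserver,system_pods,default_sa so startup does not block on workloads that cannot schedule yet.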

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-147621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-147621 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.454235597s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)
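
With --network-plugin=cni and no CNI manifest applied, ordinary pods cannot be scheduled, which is why the DeployApp, UserAppExistsAfterStop, and AddonExistsAfterStop subtests in this group are deliberate no-ops. One way to observe the state, assuming the profile is up, is to list the nodes, which typically stay NotReady with a "network plugin not ready" condition until a CNI is installed:

	kubectl --context newest-cni-147621 get nodes -o wide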

TestStartStop/group/newest-cni/serial/Stop (1.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-147621 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-147621 --alsologtostderr -v=3: (1.294167378s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-147621 -n newest-cni-147621
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-147621 -n newest-cni-147621: exit status 7 (96.261808ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-147621 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (32.43s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-147621 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3
E1114 14:34:11.141326 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
E1114 14:34:40.425617 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-147621 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.3: (31.978740757s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-147621 -n newest-cni-147621
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-147621 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/newest-cni/serial/Pause (3.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-147621 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-147621 -n newest-cni-147621
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-147621 -n newest-cni-147621: exit status 2 (371.769788ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-147621 -n newest-cni-147621
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-147621 -n newest-cni-147621: exit status 2 (583.704181ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-147621 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-147621 -n newest-cni-147621
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-147621 -n newest-cni-147621
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.65s)

TestNetworkPlugins/group/auto/Start (60.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1114 14:35:08.257635 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
E1114 14:35:33.061579 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m0.808093845s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.81s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-983191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-983191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v5fp8" [ce11188c-8096-4805-8721-337ddae12b02] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v5fp8" [ce11188c-8096-4805-8721-337ddae12b02] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.013477465s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.43s)

TestNetworkPlugins/group/auto/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-983191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.33s)

TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

TestNetworkPlugins/group/auto/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.24s)
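
The DNS, Localhost, and HairPin subtests probe three distinct paths from inside the netcat deployment: cluster DNS resolution of kubernetes.default, a loopback connection to the pod's own port, and a hairpin connection back to the pod through its own service name. The logged commands can be replayed as-is against the same context:

	kubectl --context auto-983191 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"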

TestNetworkPlugins/group/kindnet/Start (94.26s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m34.264720959s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.26s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kvlcb" [6e4eba4a-0404-405a-8851-6532249a8f98] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kvlcb" [6e4eba4a-0404-405a-8851-6532249a8f98] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.033970075s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kvlcb" [6e4eba4a-0404-405a-8851-6532249a8f98] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012435197s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-765778 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-765778 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.55s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-765778 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-765778 --alsologtostderr -v=1: (1.449533234s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778: exit status 2 (622.871547ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778: exit status 2 (579.696916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-765778 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-765778 --alsologtostderr -v=1: (1.354764439s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-765778 -n default-k8s-diff-port-765778
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.41s)
E1114 14:42:13.181029 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory

TestNetworkPlugins/group/calico/Start (71.77s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1114 14:37:49.217735 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m11.774122807s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.77s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wdgpc" [b6527490-75f9-430b-a80b-8ec796965c0e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.033398418s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
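
ControllerPod waits for the CNI's own DaemonSet pod (matched here by app=kindnet in kube-system) to become healthy before any connectivity checks run. A roughly equivalent one-liner, assuming the kindnet-983191 profile is up:

	kubectl --context kindnet-983191 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=600s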

TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-983191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-983191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t9vbv" [c8b441c5-880e-4767-938f-aba0ea4021dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:38:13.249017 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-t9vbv" [c8b441c5-880e-4767-938f-aba0ea4021dd] Running
E1114 14:38:16.902342 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.01468748s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.49s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pbdc4" [b02f4c25-ae79-4a12-9faa-caf797946caa] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.041723413s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-983191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-983191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-983191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7fnr5" [834e52f7-b384-4e64-9c89-ab29eb616181] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7fnr5" [834e52f7-b384-4e64-9c89-ab29eb616181] Running
E1114 14:38:30.197994 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/functional-927562/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.014433283s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.42s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-983191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.27s)

TestNetworkPlugins/group/calico/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.29s)

TestNetworkPlugins/group/custom-flannel/Start (66.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m6.614582008s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (66.61s)
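
Unlike the named plugins elsewhere in this group, this run hands --cni a manifest path: minikube accepts either a built-in plugin name (bridge, calico, flannel, kindnet, ...) or a path to a custom CNI manifest, which is how the repository's testdata/kube-flannel.yaml is applied here:

	out/minikube-linux-arm64 start -p custom-flannel-983191 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd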

TestNetworkPlugins/group/enable-default-cni/Start (93.88s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1114 14:39:40.424738 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/no-preload-938012/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m33.877021952s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.88s)
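
--enable-default-cni=true is the older spelling for minikube's basic bridge CNI; current minikube documents it as deprecated in favor of --cni=bridge, so this group and the bridge group below exercise closely related data paths.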

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-983191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-983191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gfm8l" [deaf5e90-c18d-406a-8146-5f3561ffca10] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gfm8l" [deaf5e90-c18d-406a-8146-5f3561ffca10] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.025089865s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.43s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-983191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (60.17s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.167573173s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.17s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-983191 "pgrep -a kubelet"
E1114 14:40:39.215053 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:39.220185 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:39.230461 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:39.251438 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:39.291969 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:39.372260 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-983191 replace --force -f testdata/netcat-deployment.yaml
E1114 14:40:39.532382 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:39.853397 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xhlbj" [be563c66-b826-42f7-9c7a-fd53c0f6bfea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:40:40.494226 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:41.774447 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
E1114 14:40:44.334703 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xhlbj" [be563c66-b826-42f7-9c7a-fd53c0f6bfea] Running
E1114 14:40:49.454953 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.011937828s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.45s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-983191 exec deployment/netcat -- nslookup kubernetes.default
E1114 14:40:51.257275 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
E1114 14:40:51.262509 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
E1114 14:40:51.272776 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
E1114 14:40:51.293450 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1114 14:40:51.333603 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
E1114 14:40:51.413848 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
E1114 14:40:51.574026 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.30s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestNetworkPlugins/group/bridge/Start (89.16s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1114 14:41:20.176483 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/default-k8s-diff-port-765778/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-983191 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m29.155095454s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.16s)

TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-sw2b8" [cdb926b8-ec04-4d9f-ada3-f5cd395d9e52] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.026699503s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-983191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-983191 replace --force -f testdata/netcat-deployment.yaml
E1114 14:41:32.220354 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/auto-983191/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lkx92" [66687b34-4c00-4e28-89e8-cd3341525fba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lkx92" [66687b34-4c00-4e28-89e8-cd3341525fba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.025319089s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

TestNetworkPlugins/group/flannel/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-983191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.32s)

TestNetworkPlugins/group/flannel/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

TestNetworkPlugins/group/flannel/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.28s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-983191 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (8.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-983191 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qjx4x" [609f60c8-dd5d-4040-9d37-13e6030f4612] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:42:49.217548 1251905 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1246551/.minikube/profiles/old-k8s-version-029919/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qjx4x" [609f60c8-dd5d-4040-9d37-13e6030f4612] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.011607378s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.34s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-983191 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-983191 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

Test skip (28/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.66s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-939008 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-939008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-939008
--- SKIP: TestDownloadOnlyKic (0.66s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
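
Note: the === PAUSE / === CONT pairs around entries such as TestOffline are standard Go test-runner output, not minikube-specific. A test that calls t.Parallel() is paused until the serial phase of its group finishes, then continued alongside the other parallel tests. A minimal illustration (not minikube code):

package example

import "testing"

// Running `go test -v` on this file prints "=== PAUSE TestParallelStyle"
// at the t.Parallel() call and "=== CONT  TestParallelStyle" when the
// runner resumes the test, matching the markers in the report above.
func TestParallelStyle(t *testing.T) {
	t.Parallel()
	t.Skip("illustrative only")
}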

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)
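
Note: most skips in this run are environment gates evaluated before any cluster work: the suite inspects the container runtime under test or the host architecture and bails out with t.Skipf. A hedged sketch of that gating pattern follows; containerRuntime, skipUnlessDockerRuntime, and skipIfArm64 are illustrative names, not minikube's actual helpers.

package example

import (
	"runtime"
	"testing"
)

// containerRuntime would normally come from the suite's --container-runtime
// test flag; hard-coded here for illustration.
var containerRuntime = "containerd"

// skipUnlessDockerRuntime gates docker-only tests such as TestDockerFlags.
func skipUnlessDockerRuntime(t *testing.T) {
	if containerRuntime != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
}

// skipIfArm64 gates tests that are unsupported on arm64
// (see https://github.com/kubernetes/minikube/issues/10144).
func skipIfArm64(t *testing.T) {
	if runtime.GOARCH == "arm64" {
		t.Skip("skipping on arm64")
	}
}

func TestDockerFlagsStyle(t *testing.T) {
	skipUnlessDockerRuntime(t)
	skipIfArm64(t)
	// ... docker-specific assertions would go here ...
}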

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-349462" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-349462
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (6.03s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-983191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-983191

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-983191

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/hosts:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/resolv.conf:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-983191

>>> host: crictl pods:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: crictl containers:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> k8s: describe netcat deployment:
error: context "kubenet-983191" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-983191" does not exist

>>> k8s: netcat logs:
error: context "kubenet-983191" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-983191" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-983191" does not exist

>>> k8s: coredns logs:
error: context "kubenet-983191" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-983191" does not exist

>>> k8s: api server logs:
error: context "kubenet-983191" does not exist

>>> host: /etc/cni:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: ip a s:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: ip r s:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: iptables-save:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: iptables table nat:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-983191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-983191" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-983191" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: kubelet daemon config:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> k8s: kubelet logs:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-983191

>>> host: docker daemon status:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: docker daemon config:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: docker system info:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: cri-docker daemon status:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: cri-docker daemon config:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: cri-dockerd version:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: containerd daemon status:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: containerd daemon config:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: containerd config dump:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: crio daemon status:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: crio daemon config:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: /etc/crio:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

>>> host: crio config:
* Profile "kubenet-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-983191"

----------------------- debugLogs end: kubenet-983191 [took: 5.731826722s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-983191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-983191
--- SKIP: TestNetworkPlugins/group/kubenet (6.03s)

TestNetworkPlugins/group/cilium (7.09s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-983191 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-983191

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-983191

>>> host: /etc/nsswitch.conf:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/hosts:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/resolv.conf:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-983191

>>> host: crictl pods:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: crictl containers:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> k8s: describe netcat deployment:
error: context "cilium-983191" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-983191" does not exist

>>> k8s: netcat logs:
error: context "cilium-983191" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-983191" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-983191" does not exist

>>> k8s: coredns logs:
error: context "cilium-983191" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-983191" does not exist

>>> k8s: api server logs:
error: context "cilium-983191" does not exist

>>> host: /etc/cni:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: ip a s:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: ip r s:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: iptables-save:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: iptables table nat:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-983191

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-983191

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-983191" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-983191" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-983191

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-983191

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-983191" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-983191" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-983191" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-983191" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-983191" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: kubelet daemon config:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> k8s: kubelet logs:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-983191

>>> host: docker daemon status:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: docker daemon config:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: docker system info:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: cri-docker daemon status:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: cri-docker daemon config:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: cri-dockerd version:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: containerd daemon status:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: containerd daemon config:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: containerd config dump:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: crio daemon status:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: crio daemon config:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: /etc/crio:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

>>> host: crio config:
* Profile "cilium-983191" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-983191"

----------------------- debugLogs end: cilium-983191 [took: 6.603201884s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-983191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-983191
--- SKIP: TestNetworkPlugins/group/cilium (7.09s)